00:00:00.001 Started by upstream project "autotest-nightly" build number 3795 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3175 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:03.079 The recommended git tool is: git 00:00:03.079 using credential 00000000-0000-0000-0000-000000000002 00:00:03.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:03.089 Fetching changes from the remote Git repository 00:00:03.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:03.100 Using shallow fetch with depth 1 00:00:03.100 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:03.100 > git --version # timeout=10 00:00:03.110 > git --version # 'git version 2.39.2' 00:00:03.110 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:03.122 Setting http proxy: proxy-dmz.intel.com:911 00:00:03.122 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:14.116 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:14.128 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:14.138 Checking out Revision ea7646cba2e992b05bb6a53407de7fbcf465b5c6 (FETCH_HEAD) 00:00:14.138 > git config core.sparsecheckout # timeout=10 00:00:14.151 > git read-tree -mu HEAD # timeout=10 00:00:14.166 > git checkout -f ea7646cba2e992b05bb6a53407de7fbcf465b5c6 # timeout=5 00:00:14.190 Commit message: "ansible/inventory: Fix GP16's BMC address" 00:00:14.190 > git rev-list --no-walk fcd93e2ba68418fb72075306675cd28d3d4f53d6 # timeout=10 00:00:14.298 [Pipeline] Start of Pipeline 00:00:14.309 [Pipeline] library 00:00:14.310 Loading library shm_lib@master 00:00:14.310 Library shm_lib@master is cached. Copying from home. 00:00:14.328 [Pipeline] node 00:00:14.339 Running on WFP10 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:14.341 [Pipeline] { 00:00:14.351 [Pipeline] catchError 00:00:14.352 [Pipeline] { 00:00:14.366 [Pipeline] wrap 00:00:14.378 [Pipeline] { 00:00:14.387 [Pipeline] stage 00:00:14.389 [Pipeline] { (Prologue) 00:00:14.576 [Pipeline] sh 00:00:14.858 + logger -p user.info -t JENKINS-CI 00:00:14.878 [Pipeline] echo 00:00:14.880 Node: WFP10 00:00:14.888 [Pipeline] sh 00:00:15.189 [Pipeline] setCustomBuildProperty 00:00:15.200 [Pipeline] echo 00:00:15.201 Cleanup processes 00:00:15.205 [Pipeline] sh 00:00:15.484 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:15.484 3310493 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:15.495 [Pipeline] sh 00:00:15.773 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:15.773 ++ grep -v 'sudo pgrep' 00:00:15.773 ++ awk '{print $1}' 00:00:15.773 + sudo kill -9 00:00:15.773 + true 00:00:15.786 [Pipeline] cleanWs 00:00:15.795 [WS-CLEANUP] Deleting project workspace... 00:00:15.795 [WS-CLEANUP] Deferred wipeout is used... 
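The "Cleanup processes" step traced above boils down to roughly the following shell fragment (a sketch reconstructed from the xtrace lines; the workspace path is the one used in this run, and the trailing "|| true" mirrors the "+ true" that keeps the step green when no stale PIDs are found):

# Kill any SPDK processes left over from a previous run in this workspace (sketch, not the verbatim pipeline script)
WORKSPACE=/var/jenkins/workspace/short-fuzz-phy-autotest
# List processes whose command line references the workspace's spdk tree, drop the pgrep invocation itself, keep the PIDs
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill -9 with an empty PID list fails, hence the "|| true" guard seen as "+ true" in the trace
sudo kill -9 $pids || true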
00:00:15.801 [WS-CLEANUP] done 00:00:15.806 [Pipeline] setCustomBuildProperty 00:00:15.820 [Pipeline] sh 00:00:16.099 + sudo git config --global --replace-all safe.directory '*' 00:00:16.169 [Pipeline] nodesByLabel 00:00:16.170 Found a total of 2 nodes with the 'sorcerer' label 00:00:16.176 [Pipeline] httpRequest 00:00:16.180 HttpMethod: GET 00:00:16.181 URL: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:16.185 Sending request to url: http://10.211.164.101/packages/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:16.198 Response Code: HTTP/1.1 200 OK 00:00:16.199 Success: Status code 200 is in the accepted range: 200,404 00:00:16.199 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:26.220 [Pipeline] sh 00:00:26.503 + tar --no-same-owner -xf jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz 00:00:26.522 [Pipeline] httpRequest 00:00:26.527 HttpMethod: GET 00:00:26.528 URL: http://10.211.164.101/packages/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz 00:00:26.528 Sending request to url: http://10.211.164.101/packages/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz 00:00:26.547 Response Code: HTTP/1.1 200 OK 00:00:26.548 Success: Status code 200 is in the accepted range: 200,404 00:00:26.548 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz 00:01:53.537 [Pipeline] sh 00:01:53.848 + tar --no-same-owner -xf spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz 00:01:58.052 [Pipeline] sh 00:01:58.334 + git -C spdk log --oneline -n5 00:01:58.334 9ccef4907 nvme/tcp: fix seq failure handling 00:01:58.334 2a268d7a6 nvme/tcp: move logic from safe ver of req complete 00:01:58.334 8531a41f9 nvme/tcp: add util to cond schedule qpair poll 00:01:58.334 b10f50b08 scripts/pkgdep: Add pkg-config package to {rhel,debian}-based distros 00:01:58.334 89d49f772 pkgdep/debian: Handle PEP 668 00:01:58.346 [Pipeline] } 00:01:58.364 [Pipeline] // stage 00:01:58.373 [Pipeline] stage 00:01:58.375 [Pipeline] { (Prepare) 00:01:58.393 [Pipeline] writeFile 00:01:58.411 [Pipeline] sh 00:01:58.693 + logger -p user.info -t JENKINS-CI 00:01:58.705 [Pipeline] sh 00:01:58.988 + logger -p user.info -t JENKINS-CI 00:01:59.005 [Pipeline] sh 00:01:59.287 + cat autorun-spdk.conf 00:01:59.287 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.287 SPDK_TEST_FUZZER_SHORT=1 00:01:59.287 SPDK_TEST_FUZZER=1 00:01:59.287 SPDK_RUN_UBSAN=1 00:01:59.294 RUN_NIGHTLY=1 00:01:59.299 [Pipeline] readFile 00:01:59.324 [Pipeline] withEnv 00:01:59.326 [Pipeline] { 00:01:59.340 [Pipeline] sh 00:01:59.643 + set -ex 00:01:59.643 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:59.643 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:59.643 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.643 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:59.643 ++ SPDK_TEST_FUZZER=1 00:01:59.643 ++ SPDK_RUN_UBSAN=1 00:01:59.643 ++ RUN_NIGHTLY=1 00:01:59.643 + case $SPDK_TEST_NVMF_NICS in 00:01:59.643 + DRIVERS= 00:01:59.643 + [[ -n '' ]] 00:01:59.643 + exit 0 00:01:59.651 [Pipeline] } 00:01:59.668 [Pipeline] // withEnv 00:01:59.674 [Pipeline] } 00:01:59.691 [Pipeline] // stage 00:01:59.701 [Pipeline] catchError 00:01:59.703 [Pipeline] { 00:01:59.717 [Pipeline] timeout 00:01:59.717 Timeout set to expire in 30 min 00:01:59.719 [Pipeline] { 00:01:59.732 [Pipeline] stage 00:01:59.733 [Pipeline] { (Tests) 00:01:59.746 [Pipeline] sh 
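The two httpRequest/tar pairs above pull pre-packaged checkouts of the jbp and spdk trees from the internal package cache and unpack them into the workspace. As plain shell that is roughly the following (a sketch: the pipeline itself uses the Jenkins httpRequest step rather than curl, which is assumed here; the URLs, SHAs, and tar flags are the ones from this run):

PKG=http://10.211.164.101/packages
WS=/var/jenkins/workspace/short-fuzz-phy-autotest
cd "$WS"
for tarball in jbp_ea7646cba2e992b05bb6a53407de7fbcf465b5c6.tar.gz \
               spdk_9ccef490756ae81d8533533981ce3becef66b7e9.tar.gz; do
    curl -fsS -o "$tarball" "$PKG/$tarball"   # assumes curl is available on the build node
    tar --no-same-owner -xf "$tarball"        # --no-same-owner keeps extracted files owned by the jenkins user
done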
00:02:00.026 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:02:00.026 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:02:00.026 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:02:00.026 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:02:00.026 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:00.026 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:02:00.026 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:02:00.026 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:02:00.026 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:02:00.026 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:02:00.026 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:02:00.026 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:02:00.026 + source /etc/os-release 00:02:00.026 ++ NAME='Fedora Linux' 00:02:00.027 ++ VERSION='38 (Cloud Edition)' 00:02:00.027 ++ ID=fedora 00:02:00.027 ++ VERSION_ID=38 00:02:00.027 ++ VERSION_CODENAME= 00:02:00.027 ++ PLATFORM_ID=platform:f38 00:02:00.027 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:00.027 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:00.027 ++ LOGO=fedora-logo-icon 00:02:00.027 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:00.027 ++ HOME_URL=https://fedoraproject.org/ 00:02:00.027 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:00.027 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:00.027 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:00.027 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:00.027 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:00.027 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:00.027 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:00.027 ++ SUPPORT_END=2024-05-14 00:02:00.027 ++ VARIANT='Cloud Edition' 00:02:00.027 ++ VARIANT_ID=cloud 00:02:00.027 + uname -a 00:02:00.027 Linux spdk-wfp-10 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:00.027 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:02:02.561 Hugepages 00:02:02.561 node hugesize free / total 00:02:02.561 node0 1048576kB 0 / 0 00:02:02.561 node0 2048kB 0 / 0 00:02:02.561 node1 1048576kB 0 / 0 00:02:02.561 node1 2048kB 0 / 0 00:02:02.561 00:02:02.561 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:02.561 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:02.561 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:02.561 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1 00:02:02.820 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:02.820 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:02.820 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:02.820 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme2 
nvme2n1 00:02:02.820 + rm -f /tmp/spdk-ld-path 00:02:02.820 + source autorun-spdk.conf 00:02:02.820 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.820 ++ SPDK_TEST_FUZZER_SHORT=1 00:02:02.820 ++ SPDK_TEST_FUZZER=1 00:02:02.820 ++ SPDK_RUN_UBSAN=1 00:02:02.820 ++ RUN_NIGHTLY=1 00:02:02.820 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:02.820 + [[ -n '' ]] 00:02:02.820 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:02.820 + for M in /var/spdk/build-*-manifest.txt 00:02:02.820 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:02.820 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:02.820 + for M in /var/spdk/build-*-manifest.txt 00:02:02.820 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:02.820 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:02:02.820 ++ uname 00:02:02.820 + [[ Linux == \L\i\n\u\x ]] 00:02:02.820 + sudo dmesg -T 00:02:02.820 + sudo dmesg --clear 00:02:02.820 + dmesg_pid=3311504 00:02:02.820 + [[ Fedora Linux == FreeBSD ]] 00:02:02.820 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.820 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:02.820 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:02.820 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:02.820 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:02.820 + [[ -x /usr/src/fio-static/fio ]] 00:02:02.820 + export FIO_BIN=/usr/src/fio-static/fio 00:02:02.820 + FIO_BIN=/usr/src/fio-static/fio 00:02:02.820 + sudo dmesg -Tw 00:02:02.820 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:02.820 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:02.820 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:02.820 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.820 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:02.820 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:02.820 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.820 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:02.820 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:02:02.820 Test configuration: 00:02:02.820 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.820 SPDK_TEST_FUZZER_SHORT=1 00:02:02.820 SPDK_TEST_FUZZER=1 00:02:02.820 SPDK_RUN_UBSAN=1 00:02:03.079 RUN_NIGHTLY=1 13:29:55 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:03.079 13:29:55 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.079 13:29:55 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.079 13:29:55 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.079 13:29:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.079 13:29:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.079 13:29:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.079 13:29:55 -- paths/export.sh@5 -- $ export PATH 00:02:03.079 13:29:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.079 13:29:55 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:03.079 13:29:55 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:03.079 13:29:55 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718105395.XXXXXX 00:02:03.079 13:29:55 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718105395.AIPQUA 00:02:03.079 13:29:55 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:03.079 13:29:55 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:02:03.079 13:29:55 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:02:03.079 13:29:55 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:03.079 13:29:55 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.079 13:29:55 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:03.079 13:29:55 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:03.079 13:29:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.079 13:29:55 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:03.079 13:29:55 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:03.079 13:29:55 -- pm/common@17 -- $ local monitor 00:02:03.079 13:29:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.079 13:29:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.079 13:29:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.079 13:29:55 -- pm/common@21 -- $ date +%s 00:02:03.079 13:29:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.079 13:29:55 -- pm/common@21 -- $ date +%s 00:02:03.079 13:29:55 -- pm/common@25 -- $ sleep 1 00:02:03.079 13:29:55 -- pm/common@21 -- $ date +%s 00:02:03.079 13:29:55 -- pm/common@21 -- $ date +%s 00:02:03.079 13:29:55 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105395 00:02:03.079 13:29:55 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105395 00:02:03.079 13:29:55 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105395 00:02:03.079 13:29:55 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1718105395 00:02:03.079 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105395_collect-vmstat.pm.log 00:02:03.079 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105395_collect-cpu-load.pm.log 00:02:03.079 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105395_collect-cpu-temp.pm.log 00:02:03.079 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1718105395_collect-bmc-pm.bmc.pm.log 00:02:04.016 13:29:56 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:04.016 13:29:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.016 13:29:56 -- spdk/autobuild.sh@12 -- $ 
umask 022 00:02:04.016 13:29:56 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:04.016 13:29:56 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.016 Tue Jun 11 11:29:56 AM UTC 2024 00:02:04.016 13:29:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.016 v24.09-pre-65-g9ccef4907 00:02:04.016 13:29:56 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:04.016 13:29:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.016 13:29:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.016 13:29:56 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:04.016 13:29:56 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:04.016 13:29:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.016 ************************************ 00:02:04.016 START TEST ubsan 00:02:04.016 ************************************ 00:02:04.016 13:29:56 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan' 00:02:04.016 using ubsan 00:02:04.016 00:02:04.016 real 0m0.000s 00:02:04.016 user 0m0.000s 00:02:04.016 sys 0m0.000s 00:02:04.016 13:29:56 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:04.016 13:29:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.016 ************************************ 00:02:04.016 END TEST ubsan 00:02:04.016 ************************************ 00:02:04.016 13:29:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:04.016 13:29:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:04.016 13:29:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:04.016 13:29:56 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:02:04.016 13:29:56 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:02:04.016 13:29:56 -- common/autobuild_common.sh@425 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:02:04.016 13:29:56 -- common/autotest_common.sh@1100 -- $ '[' 2 -le 1 ']' 00:02:04.016 13:29:56 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:04.016 13:29:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.274 ************************************ 00:02:04.274 START TEST autobuild_llvm_precompile 00:02:04.274 ************************************ 00:02:04.274 13:29:56 autobuild_llvm_precompile -- common/autotest_common.sh@1124 -- $ _llvm_precompile 00:02:04.274 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:02:04.274 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:02:04.274 Target: x86_64-redhat-linux-gnu 00:02:04.274 Thread model: posix 00:02:04.274 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:02:04.275 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=16 00:02:04.275 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:02:04.275 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:02:04.275 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:02:04.275 13:29:56 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:02:04.275 13:29:57 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:02:04.275 13:29:57 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ 
fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:04.275 13:29:57 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:02:04.275 13:29:57 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:02:04.275 13:29:57 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:04.533 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:04.533 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:05.100 Using 'verbs' RDMA provider 00:02:20.914 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:33.116 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:33.116 Creating mk/config.mk...done. 00:02:33.116 Creating mk/cc.flags.mk...done. 00:02:33.116 Type 'make' to build. 00:02:33.116 00:02:33.116 real 0m28.946s 00:02:33.116 user 0m14.694s 00:02:33.116 sys 0m13.247s 00:02:33.116 13:30:25 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:02:33.116 13:30:25 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:02:33.116 ************************************ 00:02:33.116 END TEST autobuild_llvm_precompile 00:02:33.116 ************************************ 00:02:33.116 13:30:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:33.116 13:30:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:33.116 13:30:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:33.116 13:30:25 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:02:33.116 13:30:25 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:02:33.375 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:02:33.375 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:33.941 Using 'verbs' RDMA provider 00:02:47.079 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:59.354 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:59.354 Creating mk/config.mk...done. 00:02:59.354 Creating mk/cc.flags.mk...done. 00:02:59.354 Type 'make' to build. 
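The autobuild_llvm_precompile step above detects clang 16, locates the libFuzzer runtime built without a main() (so the fuzz targets can supply their own), and re-runs configure with --with-fuzzer pointing at it. A condensed sketch of that logic, reconstructed from the trace (the extglob pattern and resulting path are the ones shown above; the configure flags are abridged, the full list appears in the log, and configure is run from the spdk checkout):

shopt -s extglob nullglob
clang_num=16                      # parsed from "clang --version" (clang 16.0.6 on this node)
export CC=clang-16 CXX=clang++-16

# Locate the fuzzer runtime archive that lacks a main() driver
fuzzer_libs=(/usr/lib*/clang/@("$clang_num")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
fuzzer_lib=${fuzzer_libs[0]}      # -> /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a here

# Re-run configure with the fuzzer archive appended to the existing config params (abridged flag list)
./configure --enable-debug --enable-werror --enable-ubsan --with-vfio-user \
            --with-fuzzer="$fuzzer_lib"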
00:02:59.354 13:30:51 -- spdk/autobuild.sh@69 -- $ run_test make make -j88 00:02:59.354 13:30:51 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']' 00:02:59.354 13:30:51 -- common/autotest_common.sh@1106 -- $ xtrace_disable 00:02:59.354 13:30:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.354 ************************************ 00:02:59.354 START TEST make 00:02:59.354 ************************************ 00:02:59.354 13:30:51 make -- common/autotest_common.sh@1124 -- $ make -j88 00:02:59.354 make[1]: Nothing to be done for 'all'. 00:03:01.260 The Meson build system 00:03:01.260 Version: 1.3.1 00:03:01.260 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:03:01.260 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:01.260 Build type: native build 00:03:01.260 Project name: libvfio-user 00:03:01.260 Project version: 0.0.1 00:03:01.260 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:01.260 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:01.260 Host machine cpu family: x86_64 00:03:01.260 Host machine cpu: x86_64 00:03:01.260 Run-time dependency threads found: YES 00:03:01.260 Library dl found: YES 00:03:01.260 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:01.260 Run-time dependency json-c found: YES 0.17 00:03:01.260 Run-time dependency cmocka found: YES 1.1.7 00:03:01.260 Program pytest-3 found: NO 00:03:01.260 Program flake8 found: NO 00:03:01.260 Program misspell-fixer found: NO 00:03:01.260 Program restructuredtext-lint found: NO 00:03:01.260 Program valgrind found: YES (/usr/bin/valgrind) 00:03:01.260 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.260 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.260 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.260 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:01.260 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:01.260 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:01.260 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:01.260 Build targets in project: 8 00:03:01.260 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:01.260 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:01.260 00:03:01.260 libvfio-user 0.0.1 00:03:01.260 00:03:01.260 User defined options 00:03:01.260 buildtype : debug 00:03:01.260 default_library: static 00:03:01.260 libdir : /usr/local/lib 00:03:01.260 00:03:01.260 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.519 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:01.519 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:03:01.519 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:01.519 [3/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:01.519 [4/36] Compiling C object samples/null.p/null.c.o 00:03:01.519 [5/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:01.519 [6/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:03:01.519 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:01.519 [8/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:03:01.519 [9/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:01.519 [10/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:01.519 [11/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:03:01.519 [12/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:03:01.519 [13/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:01.519 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:01.519 [15/36] Compiling C object test/unit_tests.p/mocks.c.o 00:03:01.519 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:01.519 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:03:01.519 [18/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:03:01.519 [19/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:01.519 [20/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:03:01.519 [21/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:01.519 [22/36] Compiling C object samples/server.p/server.c.o 00:03:01.519 [23/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:01.519 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:01.519 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:01.519 [26/36] Compiling C object samples/client.p/client.c.o 00:03:01.519 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:03:01.519 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:01.777 [29/36] Linking target samples/client 00:03:01.777 [30/36] Linking static target lib/libvfio-user.a 00:03:01.777 [31/36] Linking target test/unit_tests 00:03:01.777 [32/36] Linking target samples/shadow_ioeventfd_server 00:03:01.777 [33/36] Linking target samples/server 00:03:01.777 [34/36] Linking target samples/null 00:03:01.777 [35/36] Linking target samples/gpio-pci-idio-16 00:03:01.777 [36/36] Linking target samples/lspci 00:03:01.777 INFO: autodetecting backend as ninja 00:03:01.777 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:01.777 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:02.343 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:02.343 ninja: no work to do. 00:03:08.900 The Meson build system 00:03:08.900 Version: 1.3.1 00:03:08.900 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:03:08.900 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:03:08.900 Build type: native build 00:03:08.900 Program cat found: YES (/usr/bin/cat) 00:03:08.900 Project name: DPDK 00:03:08.900 Project version: 24.03.0 00:03:08.900 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:03:08.900 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:03:08.900 Host machine cpu family: x86_64 00:03:08.900 Host machine cpu: x86_64 00:03:08.900 Message: ## Building in Developer Mode ## 00:03:08.900 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:08.900 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:08.900 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:08.900 Program python3 found: YES (/usr/bin/python3) 00:03:08.900 Program cat found: YES (/usr/bin/cat) 00:03:08.900 Compiler for C supports arguments -march=native: YES 00:03:08.900 Checking for size of "void *" : 8 00:03:08.900 Checking for size of "void *" : 8 (cached) 00:03:08.900 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:08.900 Library m found: YES 00:03:08.900 Library numa found: YES 00:03:08.900 Has header "numaif.h" : YES 00:03:08.900 Library fdt found: NO 00:03:08.900 Library execinfo found: NO 00:03:08.900 Has header "execinfo.h" : YES 00:03:08.901 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:08.901 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:08.901 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:08.901 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:08.901 Run-time dependency openssl found: YES 3.0.9 00:03:08.901 Run-time dependency libpcap found: YES 1.10.4 00:03:08.901 Has header "pcap.h" with dependency libpcap: YES 00:03:08.901 Compiler for C supports arguments -Wcast-qual: YES 00:03:08.901 Compiler for C supports arguments -Wdeprecated: YES 00:03:08.901 Compiler for C supports arguments -Wformat: YES 00:03:08.901 Compiler for C supports arguments -Wformat-nonliteral: YES 00:03:08.901 Compiler for C supports arguments -Wformat-security: YES 00:03:08.901 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:08.901 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:08.901 Compiler for C supports arguments -Wnested-externs: YES 00:03:08.901 Compiler for C supports arguments -Wold-style-definition: YES 00:03:08.901 Compiler for C supports arguments -Wpointer-arith: YES 00:03:08.901 Compiler for C supports arguments -Wsign-compare: YES 00:03:08.901 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:08.901 Compiler for C supports arguments -Wundef: YES 00:03:08.901 Compiler for C supports arguments -Wwrite-strings: YES 00:03:08.901 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:08.901 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:03:08.901 Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:08.901 Program objdump found: YES (/usr/bin/objdump) 00:03:08.901 Compiler for C supports arguments -mavx512f: YES 00:03:08.901 Checking if "AVX512 checking" compiles: YES 00:03:08.901 Fetching value of define "__SSE4_2__" : 1 00:03:08.901 Fetching value of define "__AES__" : 1 00:03:08.901 Fetching value of define "__AVX__" : 1 00:03:08.901 Fetching value of define "__AVX2__" : 1 00:03:08.901 Fetching value of define "__AVX512BW__" : 1 00:03:08.901 Fetching value of define "__AVX512CD__" : 1 00:03:08.901 Fetching value of define "__AVX512DQ__" : 1 00:03:08.901 Fetching value of define "__AVX512F__" : 1 00:03:08.901 Fetching value of define "__AVX512VL__" : 1 00:03:08.901 Fetching value of define "__PCLMUL__" : 1 00:03:08.901 Fetching value of define "__RDRND__" : 1 00:03:08.901 Fetching value of define "__RDSEED__" : 1 00:03:08.901 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:08.901 Fetching value of define "__znver1__" : (undefined) 00:03:08.901 Fetching value of define "__znver2__" : (undefined) 00:03:08.901 Fetching value of define "__znver3__" : (undefined) 00:03:08.901 Fetching value of define "__znver4__" : (undefined) 00:03:08.901 Compiler for C supports arguments -Wno-format-truncation: NO 00:03:08.901 Message: lib/log: Defining dependency "log" 00:03:08.901 Message: lib/kvargs: Defining dependency "kvargs" 00:03:08.901 Message: lib/telemetry: Defining dependency "telemetry" 00:03:08.901 Checking for function "getentropy" : NO 00:03:08.901 Message: lib/eal: Defining dependency "eal" 00:03:08.901 Message: lib/ring: Defining dependency "ring" 00:03:08.901 Message: lib/rcu: Defining dependency "rcu" 00:03:08.901 Message: lib/mempool: Defining dependency "mempool" 00:03:08.901 Message: lib/mbuf: Defining dependency "mbuf" 00:03:08.901 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:08.901 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:08.901 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:08.901 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:08.901 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:08.901 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:08.901 Compiler for C supports arguments -mpclmul: YES 00:03:08.901 Compiler for C supports arguments -maes: YES 00:03:08.901 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:08.901 Compiler for C supports arguments -mavx512bw: YES 00:03:08.901 Compiler for C supports arguments -mavx512dq: YES 00:03:08.901 Compiler for C supports arguments -mavx512vl: YES 00:03:08.901 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:08.901 Compiler for C supports arguments -mavx2: YES 00:03:08.901 Compiler for C supports arguments -mavx: YES 00:03:08.901 Message: lib/net: Defining dependency "net" 00:03:08.901 Message: lib/meter: Defining dependency "meter" 00:03:08.901 Message: lib/ethdev: Defining dependency "ethdev" 00:03:08.901 Message: lib/pci: Defining dependency "pci" 00:03:08.901 Message: lib/cmdline: Defining dependency "cmdline" 00:03:08.901 Message: lib/hash: Defining dependency "hash" 00:03:08.901 Message: lib/timer: Defining dependency "timer" 00:03:08.901 Message: lib/compressdev: Defining dependency "compressdev" 00:03:08.901 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:08.901 Message: lib/dmadev: Defining dependency "dmadev" 00:03:08.901 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:08.901 Message: lib/power: Defining dependency "power" 00:03:08.901 Message: lib/reorder: Defining 
dependency "reorder" 00:03:08.901 Message: lib/security: Defining dependency "security" 00:03:08.901 Has header "linux/userfaultfd.h" : YES 00:03:08.901 Has header "linux/vduse.h" : YES 00:03:08.901 Message: lib/vhost: Defining dependency "vhost" 00:03:08.901 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:03:08.901 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:08.901 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:08.901 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:08.901 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:08.901 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:08.901 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:08.901 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:08.901 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:08.901 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:08.901 Program doxygen found: YES (/usr/bin/doxygen) 00:03:08.901 Configuring doxy-api-html.conf using configuration 00:03:08.901 Configuring doxy-api-man.conf using configuration 00:03:08.901 Program mandb found: YES (/usr/bin/mandb) 00:03:08.901 Program sphinx-build found: NO 00:03:08.901 Configuring rte_build_config.h using configuration 00:03:08.901 Message: 00:03:08.901 ================= 00:03:08.901 Applications Enabled 00:03:08.901 ================= 00:03:08.901 00:03:08.901 apps: 00:03:08.901 00:03:08.901 00:03:08.901 Message: 00:03:08.901 ================= 00:03:08.901 Libraries Enabled 00:03:08.901 ================= 00:03:08.901 00:03:08.901 libs: 00:03:08.901 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:08.901 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:08.901 cryptodev, dmadev, power, reorder, security, vhost, 00:03:08.901 00:03:08.901 Message: 00:03:08.901 =============== 00:03:08.901 Drivers Enabled 00:03:08.901 =============== 00:03:08.901 00:03:08.901 common: 00:03:08.901 00:03:08.901 bus: 00:03:08.901 pci, vdev, 00:03:08.901 mempool: 00:03:08.901 ring, 00:03:08.901 dma: 00:03:08.901 00:03:08.901 net: 00:03:08.901 00:03:08.901 crypto: 00:03:08.901 00:03:08.901 compress: 00:03:08.901 00:03:08.901 vdpa: 00:03:08.901 00:03:08.901 00:03:08.901 Message: 00:03:08.901 ================= 00:03:08.901 Content Skipped 00:03:08.901 ================= 00:03:08.901 00:03:08.901 apps: 00:03:08.901 dumpcap: explicitly disabled via build config 00:03:08.901 graph: explicitly disabled via build config 00:03:08.901 pdump: explicitly disabled via build config 00:03:08.901 proc-info: explicitly disabled via build config 00:03:08.901 test-acl: explicitly disabled via build config 00:03:08.901 test-bbdev: explicitly disabled via build config 00:03:08.901 test-cmdline: explicitly disabled via build config 00:03:08.901 test-compress-perf: explicitly disabled via build config 00:03:08.901 test-crypto-perf: explicitly disabled via build config 00:03:08.901 test-dma-perf: explicitly disabled via build config 00:03:08.901 test-eventdev: explicitly disabled via build config 00:03:08.901 test-fib: explicitly disabled via build config 00:03:08.901 test-flow-perf: explicitly disabled via build config 00:03:08.901 test-gpudev: explicitly disabled via build config 00:03:08.901 test-mldev: explicitly disabled via build config 00:03:08.901 test-pipeline: explicitly disabled via build config 00:03:08.901 test-pmd: explicitly 
disabled via build config 00:03:08.901 test-regex: explicitly disabled via build config 00:03:08.901 test-sad: explicitly disabled via build config 00:03:08.901 test-security-perf: explicitly disabled via build config 00:03:08.901 00:03:08.901 libs: 00:03:08.901 argparse: explicitly disabled via build config 00:03:08.901 metrics: explicitly disabled via build config 00:03:08.901 acl: explicitly disabled via build config 00:03:08.901 bbdev: explicitly disabled via build config 00:03:08.901 bitratestats: explicitly disabled via build config 00:03:08.901 bpf: explicitly disabled via build config 00:03:08.901 cfgfile: explicitly disabled via build config 00:03:08.901 distributor: explicitly disabled via build config 00:03:08.901 efd: explicitly disabled via build config 00:03:08.901 eventdev: explicitly disabled via build config 00:03:08.901 dispatcher: explicitly disabled via build config 00:03:08.901 gpudev: explicitly disabled via build config 00:03:08.901 gro: explicitly disabled via build config 00:03:08.901 gso: explicitly disabled via build config 00:03:08.901 ip_frag: explicitly disabled via build config 00:03:08.901 jobstats: explicitly disabled via build config 00:03:08.901 latencystats: explicitly disabled via build config 00:03:08.901 lpm: explicitly disabled via build config 00:03:08.901 member: explicitly disabled via build config 00:03:08.901 pcapng: explicitly disabled via build config 00:03:08.901 rawdev: explicitly disabled via build config 00:03:08.901 regexdev: explicitly disabled via build config 00:03:08.901 mldev: explicitly disabled via build config 00:03:08.902 rib: explicitly disabled via build config 00:03:08.902 sched: explicitly disabled via build config 00:03:08.902 stack: explicitly disabled via build config 00:03:08.902 ipsec: explicitly disabled via build config 00:03:08.902 pdcp: explicitly disabled via build config 00:03:08.902 fib: explicitly disabled via build config 00:03:08.902 port: explicitly disabled via build config 00:03:08.902 pdump: explicitly disabled via build config 00:03:08.902 table: explicitly disabled via build config 00:03:08.902 pipeline: explicitly disabled via build config 00:03:08.902 graph: explicitly disabled via build config 00:03:08.902 node: explicitly disabled via build config 00:03:08.902 00:03:08.902 drivers: 00:03:08.902 common/cpt: not in enabled drivers build config 00:03:08.902 common/dpaax: not in enabled drivers build config 00:03:08.902 common/iavf: not in enabled drivers build config 00:03:08.902 common/idpf: not in enabled drivers build config 00:03:08.902 common/ionic: not in enabled drivers build config 00:03:08.902 common/mvep: not in enabled drivers build config 00:03:08.902 common/octeontx: not in enabled drivers build config 00:03:08.902 bus/auxiliary: not in enabled drivers build config 00:03:08.902 bus/cdx: not in enabled drivers build config 00:03:08.902 bus/dpaa: not in enabled drivers build config 00:03:08.902 bus/fslmc: not in enabled drivers build config 00:03:08.902 bus/ifpga: not in enabled drivers build config 00:03:08.902 bus/platform: not in enabled drivers build config 00:03:08.902 bus/uacce: not in enabled drivers build config 00:03:08.902 bus/vmbus: not in enabled drivers build config 00:03:08.902 common/cnxk: not in enabled drivers build config 00:03:08.902 common/mlx5: not in enabled drivers build config 00:03:08.902 common/nfp: not in enabled drivers build config 00:03:08.902 common/nitrox: not in enabled drivers build config 00:03:08.902 common/qat: not in enabled drivers build config 
00:03:08.902 common/sfc_efx: not in enabled drivers build config 00:03:08.902 mempool/bucket: not in enabled drivers build config 00:03:08.902 mempool/cnxk: not in enabled drivers build config 00:03:08.902 mempool/dpaa: not in enabled drivers build config 00:03:08.902 mempool/dpaa2: not in enabled drivers build config 00:03:08.902 mempool/octeontx: not in enabled drivers build config 00:03:08.902 mempool/stack: not in enabled drivers build config 00:03:08.902 dma/cnxk: not in enabled drivers build config 00:03:08.902 dma/dpaa: not in enabled drivers build config 00:03:08.902 dma/dpaa2: not in enabled drivers build config 00:03:08.902 dma/hisilicon: not in enabled drivers build config 00:03:08.902 dma/idxd: not in enabled drivers build config 00:03:08.902 dma/ioat: not in enabled drivers build config 00:03:08.902 dma/skeleton: not in enabled drivers build config 00:03:08.902 net/af_packet: not in enabled drivers build config 00:03:08.902 net/af_xdp: not in enabled drivers build config 00:03:08.902 net/ark: not in enabled drivers build config 00:03:08.902 net/atlantic: not in enabled drivers build config 00:03:08.902 net/avp: not in enabled drivers build config 00:03:08.902 net/axgbe: not in enabled drivers build config 00:03:08.902 net/bnx2x: not in enabled drivers build config 00:03:08.902 net/bnxt: not in enabled drivers build config 00:03:08.902 net/bonding: not in enabled drivers build config 00:03:08.902 net/cnxk: not in enabled drivers build config 00:03:08.902 net/cpfl: not in enabled drivers build config 00:03:08.902 net/cxgbe: not in enabled drivers build config 00:03:08.902 net/dpaa: not in enabled drivers build config 00:03:08.902 net/dpaa2: not in enabled drivers build config 00:03:08.902 net/e1000: not in enabled drivers build config 00:03:08.902 net/ena: not in enabled drivers build config 00:03:08.902 net/enetc: not in enabled drivers build config 00:03:08.902 net/enetfec: not in enabled drivers build config 00:03:08.902 net/enic: not in enabled drivers build config 00:03:08.902 net/failsafe: not in enabled drivers build config 00:03:08.902 net/fm10k: not in enabled drivers build config 00:03:08.902 net/gve: not in enabled drivers build config 00:03:08.902 net/hinic: not in enabled drivers build config 00:03:08.902 net/hns3: not in enabled drivers build config 00:03:08.902 net/i40e: not in enabled drivers build config 00:03:08.902 net/iavf: not in enabled drivers build config 00:03:08.902 net/ice: not in enabled drivers build config 00:03:08.902 net/idpf: not in enabled drivers build config 00:03:08.902 net/igc: not in enabled drivers build config 00:03:08.902 net/ionic: not in enabled drivers build config 00:03:08.902 net/ipn3ke: not in enabled drivers build config 00:03:08.902 net/ixgbe: not in enabled drivers build config 00:03:08.902 net/mana: not in enabled drivers build config 00:03:08.902 net/memif: not in enabled drivers build config 00:03:08.902 net/mlx4: not in enabled drivers build config 00:03:08.902 net/mlx5: not in enabled drivers build config 00:03:08.902 net/mvneta: not in enabled drivers build config 00:03:08.902 net/mvpp2: not in enabled drivers build config 00:03:08.902 net/netvsc: not in enabled drivers build config 00:03:08.902 net/nfb: not in enabled drivers build config 00:03:08.902 net/nfp: not in enabled drivers build config 00:03:08.902 net/ngbe: not in enabled drivers build config 00:03:08.902 net/null: not in enabled drivers build config 00:03:08.902 net/octeontx: not in enabled drivers build config 00:03:08.902 net/octeon_ep: not in enabled 
drivers build config 00:03:08.902 net/pcap: not in enabled drivers build config 00:03:08.902 net/pfe: not in enabled drivers build config 00:03:08.902 net/qede: not in enabled drivers build config 00:03:08.902 net/ring: not in enabled drivers build config 00:03:08.902 net/sfc: not in enabled drivers build config 00:03:08.902 net/softnic: not in enabled drivers build config 00:03:08.902 net/tap: not in enabled drivers build config 00:03:08.902 net/thunderx: not in enabled drivers build config 00:03:08.902 net/txgbe: not in enabled drivers build config 00:03:08.902 net/vdev_netvsc: not in enabled drivers build config 00:03:08.902 net/vhost: not in enabled drivers build config 00:03:08.902 net/virtio: not in enabled drivers build config 00:03:08.902 net/vmxnet3: not in enabled drivers build config 00:03:08.902 raw/*: missing internal dependency, "rawdev" 00:03:08.902 crypto/armv8: not in enabled drivers build config 00:03:08.902 crypto/bcmfs: not in enabled drivers build config 00:03:08.902 crypto/caam_jr: not in enabled drivers build config 00:03:08.902 crypto/ccp: not in enabled drivers build config 00:03:08.902 crypto/cnxk: not in enabled drivers build config 00:03:08.902 crypto/dpaa_sec: not in enabled drivers build config 00:03:08.902 crypto/dpaa2_sec: not in enabled drivers build config 00:03:08.902 crypto/ipsec_mb: not in enabled drivers build config 00:03:08.902 crypto/mlx5: not in enabled drivers build config 00:03:08.902 crypto/mvsam: not in enabled drivers build config 00:03:08.902 crypto/nitrox: not in enabled drivers build config 00:03:08.902 crypto/null: not in enabled drivers build config 00:03:08.902 crypto/octeontx: not in enabled drivers build config 00:03:08.902 crypto/openssl: not in enabled drivers build config 00:03:08.902 crypto/scheduler: not in enabled drivers build config 00:03:08.902 crypto/uadk: not in enabled drivers build config 00:03:08.902 crypto/virtio: not in enabled drivers build config 00:03:08.902 compress/isal: not in enabled drivers build config 00:03:08.902 compress/mlx5: not in enabled drivers build config 00:03:08.902 compress/nitrox: not in enabled drivers build config 00:03:08.902 compress/octeontx: not in enabled drivers build config 00:03:08.902 compress/zlib: not in enabled drivers build config 00:03:08.902 regex/*: missing internal dependency, "regexdev" 00:03:08.902 ml/*: missing internal dependency, "mldev" 00:03:08.902 vdpa/ifc: not in enabled drivers build config 00:03:08.902 vdpa/mlx5: not in enabled drivers build config 00:03:08.902 vdpa/nfp: not in enabled drivers build config 00:03:08.902 vdpa/sfc: not in enabled drivers build config 00:03:08.902 event/*: missing internal dependency, "eventdev" 00:03:08.902 baseband/*: missing internal dependency, "bbdev" 00:03:08.902 gpu/*: missing internal dependency, "gpudev" 00:03:08.902 00:03:08.902 00:03:08.902 Build targets in project: 85 00:03:08.902 00:03:08.902 DPDK 24.03.0 00:03:08.902 00:03:08.902 User defined options 00:03:08.902 buildtype : debug 00:03:08.902 default_library : static 00:03:08.902 libdir : lib 00:03:08.902 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:03:08.902 c_args : -fPIC -Werror 00:03:08.902 c_link_args : 00:03:08.902 cpu_instruction_set: native 00:03:08.902 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:03:08.902 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:03:08.902 enable_docs : false 00:03:08.902 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:08.902 enable_kmods : false 00:03:08.902 tests : false 00:03:08.902 00:03:08.902 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:09.470 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:03:09.470 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:09.470 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:09.470 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:09.470 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:09.470 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:09.470 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:09.470 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:09.470 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:09.470 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:09.470 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:09.470 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:09.470 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:09.470 [13/268] Linking static target lib/librte_kvargs.a 00:03:09.470 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:09.470 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:09.470 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:09.470 [17/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:09.470 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:09.470 [19/268] Linking static target lib/librte_log.a 00:03:10.045 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:10.045 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:10.045 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:10.045 [23/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:10.045 [24/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:10.045 [25/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.045 [26/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.045 [27/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.045 [28/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.045 [29/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:10.045 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:10.045 [31/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.045 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:10.045 [33/268] Linking static target lib/librte_pci.a 00:03:10.045 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:10.045 [35/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:10.045 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:10.045 [37/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:10.045 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:10.045 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:10.045 [40/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.045 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:10.045 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:10.045 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:10.045 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:10.045 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:10.045 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:10.045 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:10.045 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:10.045 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:10.045 [50/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.045 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:10.045 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:10.045 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:10.045 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:10.045 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:10.045 [56/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:10.045 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:10.045 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:10.045 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:10.045 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:10.045 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:10.045 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:10.045 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:10.303 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:10.303 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:10.303 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:10.303 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:10.303 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:10.303 [69/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.303 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:10.303 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:10.303 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:10.303 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:10.303 [74/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:10.303 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:10.303 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:10.303 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:10.303 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:10.303 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:10.303 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:10.303 [81/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.303 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:10.303 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:10.303 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:10.303 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:10.303 [86/268] Linking static target lib/librte_telemetry.a 00:03:10.303 [87/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:10.303 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:10.303 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:10.303 [90/268] Linking static target lib/librte_meter.a 00:03:10.303 [91/268] Linking static target lib/librte_ring.a 00:03:10.303 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:10.303 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:10.303 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:10.303 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:10.303 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:10.303 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:10.303 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:10.303 [99/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:10.303 [100/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:10.303 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:10.303 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:10.303 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.303 [104/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.303 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.303 [106/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.303 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:10.303 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.303 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.303 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:10.303 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.303 [112/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.303 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:10.303 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:10.303 [115/268] Linking static target 
lib/librte_net.a 00:03:10.303 [116/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:10.303 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.303 [118/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:10.303 [119/268] Linking target lib/librte_log.so.24.1 00:03:10.303 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:10.303 [121/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.303 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.303 [123/268] Linking static target lib/librte_rcu.a 00:03:10.303 [124/268] Linking static target lib/librte_eal.a 00:03:10.303 [125/268] Linking static target lib/librte_mempool.a 00:03:10.303 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.303 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:10.562 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.562 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.562 [130/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:10.562 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.562 [132/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.562 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:10.563 [134/268] Linking static target lib/librte_mbuf.a 00:03:10.563 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.563 [136/268] Linking target lib/librte_kvargs.so.24.1 00:03:10.563 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:10.563 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:10.563 [139/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.563 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:10.821 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:10.821 [142/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:10.821 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:10.821 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:10.821 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.821 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:10.821 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:10.821 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:10.821 [149/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:10.821 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:10.821 [151/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:10.821 [152/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.821 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:10.821 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.821 [155/268] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.821 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:10.821 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.821 [158/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:10.821 [159/268] Linking target lib/librte_telemetry.so.24.1 00:03:10.821 [160/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:10.821 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:10.821 [162/268] Linking static target lib/librte_reorder.a 00:03:10.821 [163/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:10.821 [164/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:10.821 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:10.821 [166/268] Linking static target lib/librte_security.a 00:03:10.821 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:10.821 [168/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:10.821 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:10.821 [170/268] Linking static target lib/librte_cmdline.a 00:03:10.821 [171/268] Linking static target lib/librte_timer.a 00:03:10.821 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:10.821 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:10.821 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:10.821 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:11.080 [176/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:11.080 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:11.080 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:11.080 [179/268] Linking static target lib/librte_compressdev.a 00:03:11.080 [180/268] Linking static target lib/librte_power.a 00:03:11.080 [181/268] Linking static target lib/librte_dmadev.a 00:03:11.080 [182/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:11.080 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:11.080 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:11.080 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:11.080 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.080 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:11.080 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:11.080 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:11.080 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:11.080 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.080 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.080 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:11.080 [194/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.080 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.080 [196/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.080 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:11.080 [198/268] Linking static target drivers/librte_bus_vdev.a 00:03:11.080 [199/268] Linking static target lib/librte_cryptodev.a 00:03:11.080 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:11.080 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:11.080 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:11.080 [203/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.338 [204/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:11.338 [205/268] Linking static target lib/librte_hash.a 00:03:11.338 [206/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.338 [207/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.338 [208/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:11.338 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.338 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.338 [211/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:11.338 [212/268] Linking static target drivers/librte_bus_pci.a 00:03:11.338 [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:11.338 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.338 [215/268] Linking static target lib/librte_ethdev.a 00:03:11.338 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.338 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:11.338 [218/268] Linking static target drivers/librte_mempool_ring.a 00:03:11.338 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.596 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.596 [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:11.596 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.596 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.855 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.112 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.112 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.112 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.371 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.371 [229/268] Linking static target lib/librte_vhost.a 00:03:12.936 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.311 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.578 [232/268] Generating lib/ethdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:20.514 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.514 [234/268] Linking target lib/librte_eal.so.24.1 00:03:20.773 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:20.773 [236/268] Linking target lib/librte_timer.so.24.1 00:03:20.773 [237/268] Linking target lib/librte_pci.so.24.1 00:03:20.773 [238/268] Linking target lib/librte_ring.so.24.1 00:03:20.773 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:20.773 [240/268] Linking target lib/librte_meter.so.24.1 00:03:20.773 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:21.032 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:21.032 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:21.032 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:21.032 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:21.032 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:21.032 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:21.032 [248/268] Linking target lib/librte_rcu.so.24.1 00:03:21.032 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:21.291 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:21.291 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:21.291 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:21.291 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:21.550 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:21.550 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:21.550 [256/268] Linking target lib/librte_net.so.24.1 00:03:21.550 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:21.550 [258/268] Linking target lib/librte_reorder.so.24.1 00:03:21.550 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:21.809 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:21.809 [261/268] Linking target lib/librte_hash.so.24.1 00:03:21.810 [262/268] Linking target lib/librte_security.so.24.1 00:03:21.810 [263/268] Linking target lib/librte_cmdline.so.24.1 00:03:21.810 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:21.810 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:22.068 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:22.068 [267/268] Linking target lib/librte_power.so.24.1 00:03:22.068 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:22.068 INFO: autodetecting backend as ninja 00:03:22.068 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 88 00:03:23.004 CC lib/ut/ut.o 00:03:23.004 CC lib/ut_mock/mock.o 00:03:23.262 CC lib/log/log.o 00:03:23.262 CC lib/log/log_flags.o 00:03:23.262 CC lib/log/log_deprecated.o 00:03:23.262 LIB libspdk_ut.a 00:03:23.262 LIB libspdk_log.a 00:03:23.262 LIB libspdk_ut_mock.a 00:03:23.520 CXX lib/trace_parser/trace.o 00:03:23.520 CC lib/ioat/ioat.o 00:03:23.520 CC lib/util/base64.o 00:03:23.520 CC lib/util/bit_array.o 00:03:23.520 CC lib/util/cpuset.o 00:03:23.520 CC 
lib/util/crc16.o 00:03:23.520 CC lib/util/crc32.o 00:03:23.520 CC lib/util/crc32c.o 00:03:23.520 CC lib/util/crc32_ieee.o 00:03:23.520 CC lib/util/crc64.o 00:03:23.520 CC lib/util/dif.o 00:03:23.520 CC lib/util/fd.o 00:03:23.520 CC lib/util/file.o 00:03:23.520 CC lib/util/hexlify.o 00:03:23.520 CC lib/util/iov.o 00:03:23.520 CC lib/util/pipe.o 00:03:23.520 CC lib/util/math.o 00:03:23.520 CC lib/dma/dma.o 00:03:23.520 CC lib/util/strerror_tls.o 00:03:23.520 CC lib/util/string.o 00:03:23.520 CC lib/util/uuid.o 00:03:23.520 CC lib/util/fd_group.o 00:03:23.520 CC lib/util/xor.o 00:03:23.520 CC lib/util/zipf.o 00:03:23.779 CC lib/vfio_user/host/vfio_user_pci.o 00:03:23.779 CC lib/vfio_user/host/vfio_user.o 00:03:23.779 LIB libspdk_dma.a 00:03:23.779 LIB libspdk_ioat.a 00:03:24.037 LIB libspdk_vfio_user.a 00:03:24.037 LIB libspdk_util.a 00:03:24.296 LIB libspdk_trace_parser.a 00:03:24.296 CC lib/conf/conf.o 00:03:24.296 CC lib/env_dpdk/env.o 00:03:24.296 CC lib/env_dpdk/memory.o 00:03:24.296 CC lib/env_dpdk/pci.o 00:03:24.296 CC lib/env_dpdk/init.o 00:03:24.296 CC lib/env_dpdk/threads.o 00:03:24.296 CC lib/env_dpdk/pci_ioat.o 00:03:24.296 CC lib/env_dpdk/pci_virtio.o 00:03:24.296 CC lib/env_dpdk/pci_vmd.o 00:03:24.296 CC lib/env_dpdk/sigbus_handler.o 00:03:24.296 CC lib/env_dpdk/pci_idxd.o 00:03:24.296 CC lib/env_dpdk/pci_event.o 00:03:24.296 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:24.296 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:24.296 CC lib/env_dpdk/pci_dpdk.o 00:03:24.296 CC lib/json/json_parse.o 00:03:24.296 CC lib/rdma/rdma_verbs.o 00:03:24.296 CC lib/vmd/vmd.o 00:03:24.296 CC lib/rdma/common.o 00:03:24.296 CC lib/vmd/led.o 00:03:24.296 CC lib/json/json_util.o 00:03:24.296 CC lib/idxd/idxd.o 00:03:24.296 CC lib/json/json_write.o 00:03:24.296 CC lib/idxd/idxd_user.o 00:03:24.296 CC lib/idxd/idxd_kernel.o 00:03:24.555 LIB libspdk_conf.a 00:03:24.555 LIB libspdk_rdma.a 00:03:24.555 LIB libspdk_json.a 00:03:24.813 LIB libspdk_idxd.a 00:03:24.813 LIB libspdk_vmd.a 00:03:24.813 CC lib/jsonrpc/jsonrpc_server.o 00:03:24.813 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:24.813 CC lib/jsonrpc/jsonrpc_client.o 00:03:24.813 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.072 LIB libspdk_jsonrpc.a 00:03:25.330 CC lib/rpc/rpc.o 00:03:25.589 LIB libspdk_rpc.a 00:03:25.847 LIB libspdk_env_dpdk.a 00:03:25.847 CC lib/trace/trace.o 00:03:25.847 CC lib/trace/trace_flags.o 00:03:25.847 CC lib/trace/trace_rpc.o 00:03:25.847 CC lib/keyring/keyring_rpc.o 00:03:25.847 CC lib/keyring/keyring.o 00:03:25.847 CC lib/notify/notify.o 00:03:25.847 CC lib/notify/notify_rpc.o 00:03:26.105 LIB libspdk_notify.a 00:03:26.105 LIB libspdk_trace.a 00:03:26.105 LIB libspdk_keyring.a 00:03:26.363 CC lib/sock/sock.o 00:03:26.363 CC lib/sock/sock_rpc.o 00:03:26.363 CC lib/thread/thread.o 00:03:26.363 CC lib/thread/iobuf.o 00:03:26.929 LIB libspdk_sock.a 00:03:27.188 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.188 CC lib/nvme/nvme_ctrlr.o 00:03:27.188 CC lib/nvme/nvme_fabric.o 00:03:27.188 CC lib/nvme/nvme_ns_cmd.o 00:03:27.188 CC lib/nvme/nvme_ns.o 00:03:27.188 CC lib/nvme/nvme_pcie_common.o 00:03:27.188 CC lib/nvme/nvme_pcie.o 00:03:27.188 CC lib/nvme/nvme_qpair.o 00:03:27.188 CC lib/nvme/nvme.o 00:03:27.188 CC lib/nvme/nvme_quirks.o 00:03:27.188 CC lib/nvme/nvme_discovery.o 00:03:27.188 CC lib/nvme/nvme_transport.o 00:03:27.188 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:27.188 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:27.188 CC lib/nvme/nvme_tcp.o 00:03:27.188 CC lib/nvme/nvme_opal.o 00:03:27.188 CC lib/nvme/nvme_io_msg.o 00:03:27.188 CC 
lib/nvme/nvme_poll_group.o 00:03:27.188 CC lib/nvme/nvme_zns.o 00:03:27.188 CC lib/nvme/nvme_stubs.o 00:03:27.188 CC lib/nvme/nvme_auth.o 00:03:27.188 CC lib/nvme/nvme_cuse.o 00:03:27.188 CC lib/nvme/nvme_vfio_user.o 00:03:27.188 CC lib/nvme/nvme_rdma.o 00:03:27.756 LIB libspdk_thread.a 00:03:28.014 CC lib/blob/blobstore.o 00:03:28.014 CC lib/blob/zeroes.o 00:03:28.014 CC lib/blob/request.o 00:03:28.014 CC lib/blob/blob_bs_dev.o 00:03:28.014 CC lib/accel/accel.o 00:03:28.014 CC lib/accel/accel_rpc.o 00:03:28.014 CC lib/accel/accel_sw.o 00:03:28.014 CC lib/vfu_tgt/tgt_endpoint.o 00:03:28.014 CC lib/vfu_tgt/tgt_rpc.o 00:03:28.014 CC lib/virtio/virtio.o 00:03:28.014 CC lib/virtio/virtio_vhost_user.o 00:03:28.014 CC lib/virtio/virtio_vfio_user.o 00:03:28.014 CC lib/virtio/virtio_pci.o 00:03:28.014 CC lib/init/json_config.o 00:03:28.014 CC lib/init/subsystem.o 00:03:28.014 CC lib/init/subsystem_rpc.o 00:03:28.014 CC lib/init/rpc.o 00:03:28.272 LIB libspdk_vfu_tgt.a 00:03:28.272 LIB libspdk_init.a 00:03:28.272 LIB libspdk_virtio.a 00:03:28.530 CC lib/event/app.o 00:03:28.530 CC lib/event/reactor.o 00:03:28.530 CC lib/event/log_rpc.o 00:03:28.530 CC lib/event/app_rpc.o 00:03:28.530 CC lib/event/scheduler_static.o 00:03:28.789 LIB libspdk_event.a 00:03:29.047 LIB libspdk_accel.a 00:03:29.047 LIB libspdk_nvme.a 00:03:29.307 CC lib/bdev/bdev.o 00:03:29.307 CC lib/bdev/bdev_rpc.o 00:03:29.307 CC lib/bdev/part.o 00:03:29.307 CC lib/bdev/bdev_zone.o 00:03:29.307 CC lib/bdev/scsi_nvme.o 00:03:30.684 LIB libspdk_blob.a 00:03:30.941 CC lib/blobfs/blobfs.o 00:03:30.941 CC lib/blobfs/tree.o 00:03:30.941 CC lib/lvol/lvol.o 00:03:31.508 LIB libspdk_lvol.a 00:03:31.767 LIB libspdk_blobfs.a 00:03:32.042 LIB libspdk_bdev.a 00:03:32.354 CC lib/scsi/dev.o 00:03:32.354 CC lib/scsi/lun.o 00:03:32.354 CC lib/scsi/port.o 00:03:32.354 CC lib/scsi/scsi.o 00:03:32.354 CC lib/scsi/scsi_bdev.o 00:03:32.354 CC lib/scsi/scsi_pr.o 00:03:32.354 CC lib/scsi/scsi_rpc.o 00:03:32.354 CC lib/scsi/task.o 00:03:32.354 CC lib/nbd/nbd.o 00:03:32.354 CC lib/nbd/nbd_rpc.o 00:03:32.354 CC lib/ftl/ftl_core.o 00:03:32.354 CC lib/ftl/ftl_init.o 00:03:32.354 CC lib/ftl/ftl_layout.o 00:03:32.354 CC lib/ftl/ftl_debug.o 00:03:32.354 CC lib/nvmf/ctrlr.o 00:03:32.354 CC lib/ftl/ftl_io.o 00:03:32.354 CC lib/ftl/ftl_sb.o 00:03:32.354 CC lib/ublk/ublk.o 00:03:32.354 CC lib/ftl/ftl_l2p_flat.o 00:03:32.354 CC lib/ftl/ftl_l2p.o 00:03:32.354 CC lib/nvmf/ctrlr_discovery.o 00:03:32.354 CC lib/ublk/ublk_rpc.o 00:03:32.354 CC lib/nvmf/ctrlr_bdev.o 00:03:32.354 CC lib/nvmf/nvmf.o 00:03:32.354 CC lib/nvmf/subsystem.o 00:03:32.354 CC lib/ftl/ftl_nv_cache.o 00:03:32.354 CC lib/ftl/ftl_band.o 00:03:32.354 CC lib/nvmf/nvmf_rpc.o 00:03:32.354 CC lib/ftl/ftl_band_ops.o 00:03:32.354 CC lib/nvmf/transport.o 00:03:32.354 CC lib/ftl/ftl_writer.o 00:03:32.354 CC lib/nvmf/tcp.o 00:03:32.354 CC lib/ftl/ftl_rq.o 00:03:32.354 CC lib/ftl/ftl_reloc.o 00:03:32.354 CC lib/nvmf/stubs.o 00:03:32.354 CC lib/nvmf/mdns_server.o 00:03:32.354 CC lib/ftl/ftl_l2p_cache.o 00:03:32.354 CC lib/ftl/ftl_p2l.o 00:03:32.354 CC lib/nvmf/vfio_user.o 00:03:32.354 CC lib/nvmf/rdma.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:32.354 CC lib/nvmf/auth.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:32.354 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:32.354 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:32.354 CC lib/ftl/utils/ftl_conf.o 00:03:32.354 CC lib/ftl/utils/ftl_md.o 00:03:32.354 CC lib/ftl/utils/ftl_mempool.o 00:03:32.354 CC lib/ftl/utils/ftl_bitmap.o 00:03:32.354 CC lib/ftl/utils/ftl_property.o 00:03:32.354 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:32.354 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:32.354 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:32.354 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:32.354 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:32.354 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:32.354 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:32.354 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:32.354 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:32.354 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:32.354 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:32.354 CC lib/ftl/base/ftl_base_dev.o 00:03:32.354 CC lib/ftl/base/ftl_base_bdev.o 00:03:32.354 CC lib/ftl/ftl_trace.o 00:03:32.963 LIB libspdk_nbd.a 00:03:32.963 LIB libspdk_scsi.a 00:03:32.963 LIB libspdk_ublk.a 00:03:32.963 CC lib/iscsi/conn.o 00:03:32.963 CC lib/iscsi/md5.o 00:03:32.963 CC lib/iscsi/iscsi.o 00:03:32.963 CC lib/iscsi/init_grp.o 00:03:32.963 CC lib/vhost/vhost.o 00:03:32.963 CC lib/iscsi/param.o 00:03:32.963 CC lib/vhost/vhost_rpc.o 00:03:32.963 CC lib/vhost/vhost_blk.o 00:03:32.963 CC lib/iscsi/portal_grp.o 00:03:32.963 CC lib/vhost/vhost_scsi.o 00:03:32.963 CC lib/iscsi/tgt_node.o 00:03:32.963 CC lib/iscsi/iscsi_subsystem.o 00:03:32.963 CC lib/vhost/rte_vhost_user.o 00:03:32.963 CC lib/iscsi/iscsi_rpc.o 00:03:32.963 CC lib/iscsi/task.o 00:03:33.220 LIB libspdk_ftl.a 00:03:34.153 LIB libspdk_vhost.a 00:03:34.153 LIB libspdk_nvmf.a 00:03:34.153 LIB libspdk_iscsi.a 00:03:34.719 CC module/vfu_device/vfu_virtio.o 00:03:34.719 CC module/vfu_device/vfu_virtio_blk.o 00:03:34.719 CC module/vfu_device/vfu_virtio_scsi.o 00:03:34.719 CC module/vfu_device/vfu_virtio_rpc.o 00:03:34.719 CC module/env_dpdk/env_dpdk_rpc.o 00:03:34.719 CC module/sock/posix/posix.o 00:03:34.719 CC module/keyring/linux/keyring.o 00:03:34.719 CC module/accel/iaa/accel_iaa.o 00:03:34.719 CC module/keyring/linux/keyring_rpc.o 00:03:34.719 CC module/accel/iaa/accel_iaa_rpc.o 00:03:34.719 CC module/keyring/file/keyring.o 00:03:34.719 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:34.719 CC module/keyring/file/keyring_rpc.o 00:03:34.719 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:34.719 CC module/scheduler/gscheduler/gscheduler.o 00:03:34.719 CC module/accel/dsa/accel_dsa.o 00:03:34.719 CC module/blob/bdev/blob_bdev.o 00:03:34.719 CC module/accel/dsa/accel_dsa_rpc.o 00:03:34.719 CC module/accel/ioat/accel_ioat.o 00:03:34.719 CC module/accel/ioat/accel_ioat_rpc.o 00:03:34.719 CC module/accel/error/accel_error.o 00:03:34.719 CC module/accel/error/accel_error_rpc.o 00:03:34.719 LIB libspdk_env_dpdk_rpc.a 00:03:34.977 LIB libspdk_keyring_linux.a 00:03:34.977 LIB libspdk_keyring_file.a 00:03:34.977 LIB libspdk_scheduler_gscheduler.a 00:03:34.977 LIB libspdk_scheduler_dpdk_governor.a 00:03:34.977 LIB libspdk_blob_bdev.a 00:03:34.977 LIB libspdk_scheduler_dynamic.a 00:03:34.977 LIB libspdk_accel_error.a 00:03:34.977 LIB libspdk_accel_iaa.a 00:03:34.977 LIB libspdk_accel_ioat.a 00:03:34.977 LIB libspdk_accel_dsa.a 00:03:35.235 LIB libspdk_vfu_device.a 00:03:35.235 CC module/bdev/gpt/gpt.o 00:03:35.235 CC module/bdev/gpt/vbdev_gpt.o 
00:03:35.235 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:35.235 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:35.235 CC module/blobfs/bdev/blobfs_bdev.o 00:03:35.235 CC module/bdev/lvol/vbdev_lvol.o 00:03:35.235 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:35.235 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:35.235 CC module/bdev/iscsi/bdev_iscsi.o 00:03:35.235 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.235 CC module/bdev/split/vbdev_split.o 00:03:35.235 CC module/bdev/ftl/bdev_ftl.o 00:03:35.235 CC module/bdev/split/vbdev_split_rpc.o 00:03:35.235 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.235 CC module/bdev/null/bdev_null.o 00:03:35.235 CC module/bdev/null/bdev_null_rpc.o 00:03:35.235 CC module/bdev/malloc/bdev_malloc.o 00:03:35.235 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:35.235 CC module/bdev/aio/bdev_aio.o 00:03:35.235 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.235 CC module/bdev/raid/bdev_raid.o 00:03:35.235 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.235 CC module/bdev/raid/bdev_raid_sb.o 00:03:35.494 CC module/bdev/error/vbdev_error.o 00:03:35.494 CC module/bdev/error/vbdev_error_rpc.o 00:03:35.494 CC module/bdev/raid/raid0.o 00:03:35.494 CC module/bdev/raid/raid1.o 00:03:35.494 CC module/bdev/raid/concat.o 00:03:35.494 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:35.494 CC module/bdev/passthru/vbdev_passthru.o 00:03:35.494 CC module/bdev/nvme/bdev_nvme.o 00:03:35.494 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:35.494 CC module/bdev/nvme/nvme_rpc.o 00:03:35.494 CC module/bdev/nvme/vbdev_opal.o 00:03:35.494 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.494 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.494 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.494 CC module/bdev/delay/vbdev_delay.o 00:03:35.494 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:35.494 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:35.494 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:35.494 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:35.494 LIB libspdk_sock_posix.a 00:03:35.494 LIB libspdk_blobfs_bdev.a 00:03:35.494 LIB libspdk_bdev_gpt.a 00:03:35.494 LIB libspdk_bdev_null.a 00:03:35.494 LIB libspdk_bdev_split.a 00:03:35.777 LIB libspdk_bdev_iscsi.a 00:03:35.777 LIB libspdk_bdev_error.a 00:03:35.777 LIB libspdk_bdev_ftl.a 00:03:35.777 LIB libspdk_bdev_passthru.a 00:03:35.777 LIB libspdk_bdev_zone_block.a 00:03:35.777 LIB libspdk_bdev_aio.a 00:03:35.777 LIB libspdk_bdev_delay.a 00:03:35.777 LIB libspdk_bdev_malloc.a 00:03:35.777 LIB libspdk_bdev_lvol.a 00:03:35.777 LIB libspdk_bdev_virtio.a 00:03:36.342 LIB libspdk_bdev_raid.a 00:03:37.277 LIB libspdk_bdev_nvme.a 00:03:37.844 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.844 CC module/event/subsystems/vmd/vmd.o 00:03:37.844 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.844 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.844 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.844 CC module/event/subsystems/scheduler/scheduler.o 00:03:37.844 CC module/event/subsystems/keyring/keyring.o 00:03:37.844 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:37.844 CC module/event/subsystems/sock/sock.o 00:03:38.103 LIB libspdk_event_vhost_blk.a 00:03:38.103 LIB libspdk_event_vmd.a 00:03:38.103 LIB libspdk_event_keyring.a 00:03:38.103 LIB libspdk_event_scheduler.a 00:03:38.103 LIB libspdk_event_vfu_tgt.a 00:03:38.103 LIB libspdk_event_sock.a 00:03:38.103 LIB libspdk_event_iobuf.a 00:03:38.361 CC module/event/subsystems/accel/accel.o 00:03:38.619 LIB libspdk_event_accel.a 00:03:38.878 CC module/event/subsystems/bdev/bdev.o 
00:03:38.878 LIB libspdk_event_bdev.a 00:03:39.136 CC module/event/subsystems/scsi/scsi.o 00:03:39.136 CC module/event/subsystems/nbd/nbd.o 00:03:39.136 CC module/event/subsystems/ublk/ublk.o 00:03:39.397 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:39.397 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:39.397 LIB libspdk_event_ublk.a 00:03:39.397 LIB libspdk_event_nbd.a 00:03:39.397 LIB libspdk_event_scsi.a 00:03:39.397 LIB libspdk_event_nvmf.a 00:03:39.655 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:39.655 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.913 LIB libspdk_event_vhost_scsi.a 00:03:39.913 LIB libspdk_event_iscsi.a 00:03:40.176 TEST_HEADER include/spdk/accel.h 00:03:40.176 TEST_HEADER include/spdk/accel_module.h 00:03:40.176 TEST_HEADER include/spdk/barrier.h 00:03:40.176 CXX app/trace/trace.o 00:03:40.176 TEST_HEADER include/spdk/base64.h 00:03:40.176 TEST_HEADER include/spdk/assert.h 00:03:40.176 TEST_HEADER include/spdk/bdev.h 00:03:40.176 TEST_HEADER include/spdk/bdev_module.h 00:03:40.176 TEST_HEADER include/spdk/bdev_zone.h 00:03:40.176 TEST_HEADER include/spdk/bit_array.h 00:03:40.176 TEST_HEADER include/spdk/blob_bdev.h 00:03:40.176 TEST_HEADER include/spdk/bit_pool.h 00:03:40.176 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:40.176 TEST_HEADER include/spdk/blobfs.h 00:03:40.176 TEST_HEADER include/spdk/blob.h 00:03:40.176 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.176 CC app/trace_record/trace_record.o 00:03:40.176 TEST_HEADER include/spdk/config.h 00:03:40.176 TEST_HEADER include/spdk/conf.h 00:03:40.176 TEST_HEADER include/spdk/cpuset.h 00:03:40.176 CC app/spdk_top/spdk_top.o 00:03:40.176 TEST_HEADER include/spdk/crc16.h 00:03:40.176 TEST_HEADER include/spdk/crc32.h 00:03:40.176 CC app/spdk_nvme_perf/perf.o 00:03:40.176 CC test/rpc_client/rpc_client_test.o 00:03:40.176 TEST_HEADER include/spdk/crc64.h 00:03:40.176 TEST_HEADER include/spdk/dif.h 00:03:40.176 TEST_HEADER include/spdk/endian.h 00:03:40.176 TEST_HEADER include/spdk/dma.h 00:03:40.176 TEST_HEADER include/spdk/env.h 00:03:40.176 TEST_HEADER include/spdk/env_dpdk.h 00:03:40.176 TEST_HEADER include/spdk/event.h 00:03:40.176 TEST_HEADER include/spdk/fd_group.h 00:03:40.176 TEST_HEADER include/spdk/fd.h 00:03:40.176 CC app/spdk_lspci/spdk_lspci.o 00:03:40.176 CC app/spdk_nvme_identify/identify.o 00:03:40.176 TEST_HEADER include/spdk/file.h 00:03:40.176 TEST_HEADER include/spdk/ftl.h 00:03:40.176 TEST_HEADER include/spdk/gpt_spec.h 00:03:40.176 TEST_HEADER include/spdk/hexlify.h 00:03:40.176 TEST_HEADER include/spdk/histogram_data.h 00:03:40.176 TEST_HEADER include/spdk/idxd.h 00:03:40.176 TEST_HEADER include/spdk/idxd_spec.h 00:03:40.176 TEST_HEADER include/spdk/init.h 00:03:40.177 TEST_HEADER include/spdk/ioat.h 00:03:40.177 TEST_HEADER include/spdk/ioat_spec.h 00:03:40.177 TEST_HEADER include/spdk/iscsi_spec.h 00:03:40.177 TEST_HEADER include/spdk/json.h 00:03:40.177 TEST_HEADER include/spdk/jsonrpc.h 00:03:40.177 TEST_HEADER include/spdk/keyring.h 00:03:40.177 TEST_HEADER include/spdk/keyring_module.h 00:03:40.177 TEST_HEADER include/spdk/likely.h 00:03:40.177 TEST_HEADER include/spdk/log.h 00:03:40.177 TEST_HEADER include/spdk/lvol.h 00:03:40.177 TEST_HEADER include/spdk/memory.h 00:03:40.177 TEST_HEADER include/spdk/mmio.h 00:03:40.177 TEST_HEADER include/spdk/nbd.h 00:03:40.177 TEST_HEADER include/spdk/notify.h 00:03:40.177 TEST_HEADER include/spdk/nvme.h 00:03:40.177 TEST_HEADER include/spdk/nvme_intel.h 00:03:40.177 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:40.177 
TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:40.177 TEST_HEADER include/spdk/nvme_spec.h 00:03:40.177 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:40.177 TEST_HEADER include/spdk/nvme_zns.h 00:03:40.177 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:40.177 TEST_HEADER include/spdk/nvmf.h 00:03:40.177 TEST_HEADER include/spdk/nvmf_spec.h 00:03:40.177 TEST_HEADER include/spdk/nvmf_transport.h 00:03:40.177 TEST_HEADER include/spdk/opal.h 00:03:40.177 TEST_HEADER include/spdk/opal_spec.h 00:03:40.177 TEST_HEADER include/spdk/pci_ids.h 00:03:40.177 TEST_HEADER include/spdk/pipe.h 00:03:40.177 TEST_HEADER include/spdk/queue.h 00:03:40.177 TEST_HEADER include/spdk/reduce.h 00:03:40.177 TEST_HEADER include/spdk/rpc.h 00:03:40.177 TEST_HEADER include/spdk/scsi.h 00:03:40.177 TEST_HEADER include/spdk/scheduler.h 00:03:40.177 TEST_HEADER include/spdk/sock.h 00:03:40.177 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.177 TEST_HEADER include/spdk/stdinc.h 00:03:40.177 TEST_HEADER include/spdk/string.h 00:03:40.177 TEST_HEADER include/spdk/thread.h 00:03:40.177 TEST_HEADER include/spdk/trace.h 00:03:40.177 TEST_HEADER include/spdk/trace_parser.h 00:03:40.177 TEST_HEADER include/spdk/tree.h 00:03:40.177 TEST_HEADER include/spdk/ublk.h 00:03:40.177 TEST_HEADER include/spdk/util.h 00:03:40.177 TEST_HEADER include/spdk/uuid.h 00:03:40.177 TEST_HEADER include/spdk/version.h 00:03:40.177 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.177 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.177 CC app/spdk_dd/spdk_dd.o 00:03:40.177 TEST_HEADER include/spdk/vhost.h 00:03:40.177 TEST_HEADER include/spdk/vmd.h 00:03:40.177 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:40.177 TEST_HEADER include/spdk/xor.h 00:03:40.177 TEST_HEADER include/spdk/zipf.h 00:03:40.177 CXX test/cpp_headers/accel.o 00:03:40.177 CXX test/cpp_headers/accel_module.o 00:03:40.177 CXX test/cpp_headers/assert.o 00:03:40.177 CXX test/cpp_headers/barrier.o 00:03:40.177 CXX test/cpp_headers/base64.o 00:03:40.177 CXX test/cpp_headers/bdev.o 00:03:40.177 CXX test/cpp_headers/bdev_module.o 00:03:40.177 CXX test/cpp_headers/bdev_zone.o 00:03:40.177 CC app/nvmf_tgt/nvmf_main.o 00:03:40.177 CXX test/cpp_headers/bit_array.o 00:03:40.177 CXX test/cpp_headers/bit_pool.o 00:03:40.177 CXX test/cpp_headers/blobfs_bdev.o 00:03:40.177 CXX test/cpp_headers/blob_bdev.o 00:03:40.177 CC app/iscsi_tgt/iscsi_tgt.o 00:03:40.441 CC test/app/jsoncat/jsoncat.o 00:03:40.441 CC app/vhost/vhost.o 00:03:40.441 CC test/env/memory/memory_ut.o 00:03:40.441 CC test/env/pci/pci_ut.o 00:03:40.441 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.441 CC test/event/event_perf/event_perf.o 00:03:40.441 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:40.441 CC test/app/histogram_perf/histogram_perf.o 00:03:40.441 CC examples/nvme/hello_world/hello_world.o 00:03:40.441 CC test/env/vtophys/vtophys.o 00:03:40.441 CC app/spdk_tgt/spdk_tgt.o 00:03:40.441 CC examples/vmd/led/led.o 00:03:40.441 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:40.441 CC test/app/stub/stub.o 00:03:40.441 CC test/blobfs/mkfs/mkfs.o 00:03:40.441 CC test/event/reactor_perf/reactor_perf.o 00:03:40.441 CC test/thread/lock/spdk_lock.o 00:03:40.441 CC test/event/reactor/reactor.o 00:03:40.441 CC app/fio/nvme/fio_plugin.o 00:03:40.441 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:40.441 CC examples/idxd/perf/perf.o 00:03:40.441 CC examples/ioat/perf/perf.o 00:03:40.441 CC test/nvme/err_injection/err_injection.o 00:03:40.441 CC examples/ioat/verify/verify.o 00:03:40.441 CC 
test/nvme/boot_partition/boot_partition.o 00:03:40.441 CC test/nvme/e2edp/nvme_dp.o 00:03:40.441 CC test/thread/poller_perf/poller_perf.o 00:03:40.441 CC test/nvme/reserve/reserve.o 00:03:40.441 CC examples/nvme/reconnect/reconnect.o 00:03:40.441 CC examples/nvme/abort/abort.o 00:03:40.441 CC test/nvme/cuse/cuse.o 00:03:40.441 CC test/nvme/aer/aer.o 00:03:40.441 CC test/nvme/fused_ordering/fused_ordering.o 00:03:40.441 CC test/event/app_repeat/app_repeat.o 00:03:40.441 CC test/nvme/sgl/sgl.o 00:03:40.441 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.441 CC examples/nvme/hotplug/hotplug.o 00:03:40.441 CC test/nvme/reset/reset.o 00:03:40.441 CC test/nvme/startup/startup.o 00:03:40.441 CC examples/nvme/arbitration/arbitration.o 00:03:40.441 CC examples/sock/hello_world/hello_sock.o 00:03:40.441 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:40.441 CC examples/util/zipf/zipf.o 00:03:40.441 CC test/nvme/overhead/overhead.o 00:03:40.441 CC examples/accel/perf/accel_perf.o 00:03:40.441 CC test/nvme/connect_stress/connect_stress.o 00:03:40.441 CC test/nvme/fdp/fdp.o 00:03:40.441 CC test/accel/dif/dif.o 00:03:40.441 CC test/nvme/compliance/nvme_compliance.o 00:03:40.441 LINK spdk_lspci 00:03:40.441 CC test/nvme/simple_copy/simple_copy.o 00:03:40.441 CC test/bdev/bdevio/bdevio.o 00:03:40.441 CC test/app/bdev_svc/bdev_svc.o 00:03:40.441 CC examples/blob/cli/blobcli.o 00:03:40.441 CC test/event/scheduler/scheduler.o 00:03:40.441 CC test/dma/test_dma/test_dma.o 00:03:40.441 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.441 CC examples/blob/hello_world/hello_blob.o 00:03:40.441 LINK rpc_client_test 00:03:40.441 CC app/fio/bdev/fio_plugin.o 00:03:40.441 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.441 CC examples/thread/thread/thread_ex.o 00:03:40.441 CC examples/bdev/bdevperf/bdevperf.o 00:03:40.441 CC examples/bdev/hello_world/hello_bdev.o 00:03:40.441 CC test/lvol/esnap/esnap.o 00:03:40.441 CC examples/nvmf/nvmf/nvmf.o 00:03:40.441 LINK histogram_perf 00:03:40.441 LINK led 00:03:40.441 LINK spdk_nvme_discover 00:03:40.699 CXX test/cpp_headers/blobfs.o 00:03:40.699 LINK jsoncat 00:03:40.699 CXX test/cpp_headers/blob.o 00:03:40.699 CXX test/cpp_headers/conf.o 00:03:40.699 LINK event_perf 00:03:40.699 LINK vtophys 00:03:40.699 LINK reactor 00:03:40.699 LINK reactor_perf 00:03:40.699 LINK vhost 00:03:40.699 LINK boot_partition 00:03:40.699 LINK lsvmd 00:03:40.699 LINK env_dpdk_post_init 00:03:40.699 LINK interrupt_tgt 00:03:40.699 CXX test/cpp_headers/config.o 00:03:40.699 LINK spdk_trace_record 00:03:40.699 fio_plugin.c:1559:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:40.699 struct spdk_nvme_fdp_ruhs ruhs; 00:03:40.699 ^ 00:03:40.699 LINK poller_perf 00:03:40.699 CXX test/cpp_headers/cpuset.o 00:03:40.699 LINK connect_stress 00:03:40.699 LINK nvmf_tgt 00:03:40.699 LINK zipf 00:03:40.699 LINK hello_world 00:03:40.699 CXX test/cpp_headers/crc16.o 00:03:40.699 CXX test/cpp_headers/crc32.o 00:03:40.699 LINK app_repeat 00:03:40.699 LINK doorbell_aers 00:03:40.699 CXX test/cpp_headers/crc64.o 00:03:40.699 LINK stub 00:03:40.699 LINK iscsi_tgt 00:03:40.699 LINK err_injection 00:03:40.699 CXX test/cpp_headers/dif.o 00:03:40.699 LINK startup 00:03:40.699 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.699 LINK cmb_copy 00:03:40.699 LINK spdk_trace 00:03:40.699 LINK reserve 00:03:40.699 LINK fused_ordering 00:03:40.699 LINK verify 00:03:40.699 LINK mkfs 00:03:40.699 
LINK pmr_persistence 00:03:40.699 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:40.699 LINK ioat_perf 00:03:40.699 LINK bdev_svc 00:03:40.699 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:03:40.699 LINK hotplug 00:03:40.699 LINK spdk_tgt 00:03:40.699 LINK idxd_perf 00:03:40.699 LINK hello_sock 00:03:40.699 LINK simple_copy 00:03:40.699 LINK hello_blob 00:03:40.699 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:03:40.699 LINK nvme_dp 00:03:40.699 LINK fdp 00:03:40.962 LINK aer 00:03:40.962 LINK reset 00:03:40.962 LINK sgl 00:03:40.962 LINK overhead 00:03:40.962 LINK scheduler 00:03:40.962 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:40.962 CXX test/cpp_headers/dma.o 00:03:40.962 CXX test/cpp_headers/endian.o 00:03:40.962 LINK thread 00:03:40.962 CXX test/cpp_headers/env_dpdk.o 00:03:40.962 CXX test/cpp_headers/env.o 00:03:40.962 LINK hello_bdev 00:03:40.962 LINK nvmf 00:03:40.962 CXX test/cpp_headers/event.o 00:03:40.962 CXX test/cpp_headers/fd_group.o 00:03:40.962 LINK reconnect 00:03:40.962 CXX test/cpp_headers/fd.o 00:03:40.962 LINK abort 00:03:40.962 LINK arbitration 00:03:40.962 CXX test/cpp_headers/file.o 00:03:40.962 1 warning generated. 00:03:40.962 LINK bdevio 00:03:40.962 LINK nvme_manage 00:03:40.962 LINK test_dma 00:03:40.962 LINK pci_ut 00:03:40.962 LINK spdk_nvme 00:03:40.962 LINK nvme_compliance 00:03:40.962 LINK accel_perf 00:03:41.220 CXX test/cpp_headers/ftl.o 00:03:41.220 LINK spdk_dd 00:03:41.220 CXX test/cpp_headers/gpt_spec.o 00:03:41.220 LINK nvme_fuzz 00:03:41.220 LINK dif 00:03:41.220 CXX test/cpp_headers/hexlify.o 00:03:41.220 CXX test/cpp_headers/histogram_data.o 00:03:41.220 CXX test/cpp_headers/idxd.o 00:03:41.220 LINK blobcli 00:03:41.220 CXX test/cpp_headers/idxd_spec.o 00:03:41.220 CXX test/cpp_headers/init.o 00:03:41.220 CXX test/cpp_headers/ioat.o 00:03:41.220 CXX test/cpp_headers/ioat_spec.o 00:03:41.220 CXX test/cpp_headers/iscsi_spec.o 00:03:41.220 LINK spdk_bdev 00:03:41.220 LINK llvm_vfio_fuzz 00:03:41.220 CXX test/cpp_headers/json.o 00:03:41.220 CXX test/cpp_headers/jsonrpc.o 00:03:41.220 CXX test/cpp_headers/keyring.o 00:03:41.220 CXX test/cpp_headers/keyring_module.o 00:03:41.220 CXX test/cpp_headers/likely.o 00:03:41.220 LINK mem_callbacks 00:03:41.220 LINK spdk_nvme_identify 00:03:41.220 CXX test/cpp_headers/log.o 00:03:41.482 CXX test/cpp_headers/lvol.o 00:03:41.482 CXX test/cpp_headers/memory.o 00:03:41.482 CXX test/cpp_headers/mmio.o 00:03:41.482 CXX test/cpp_headers/nbd.o 00:03:41.482 CXX test/cpp_headers/notify.o 00:03:41.482 CXX test/cpp_headers/nvme.o 00:03:41.482 CXX test/cpp_headers/nvme_intel.o 00:03:41.482 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.482 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.482 CXX test/cpp_headers/nvme_spec.o 00:03:41.482 CXX test/cpp_headers/nvme_zns.o 00:03:41.482 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.482 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.482 LINK vhost_fuzz 00:03:41.482 CXX test/cpp_headers/nvmf.o 00:03:41.482 LINK spdk_nvme_perf 00:03:41.482 CXX test/cpp_headers/nvmf_spec.o 00:03:41.482 CXX test/cpp_headers/nvmf_transport.o 00:03:41.482 CXX test/cpp_headers/opal.o 00:03:41.482 CXX test/cpp_headers/opal_spec.o 00:03:41.482 CXX test/cpp_headers/pci_ids.o 00:03:41.482 CXX test/cpp_headers/pipe.o 00:03:41.482 CXX test/cpp_headers/queue.o 00:03:41.482 CXX test/cpp_headers/reduce.o 00:03:41.482 CXX test/cpp_headers/rpc.o 00:03:41.482 CXX test/cpp_headers/scheduler.o 00:03:41.749 CXX test/cpp_headers/scsi.o 00:03:41.749 CXX test/cpp_headers/scsi_spec.o 00:03:41.749 CXX 
test/cpp_headers/sock.o 00:03:41.749 CXX test/cpp_headers/stdinc.o 00:03:41.749 CXX test/cpp_headers/string.o 00:03:41.749 CXX test/cpp_headers/thread.o 00:03:41.749 CXX test/cpp_headers/trace.o 00:03:41.749 CXX test/cpp_headers/trace_parser.o 00:03:41.749 CXX test/cpp_headers/tree.o 00:03:41.749 LINK bdevperf 00:03:41.749 CXX test/cpp_headers/ublk.o 00:03:41.749 CXX test/cpp_headers/util.o 00:03:41.749 LINK spdk_top 00:03:41.749 CXX test/cpp_headers/uuid.o 00:03:41.749 CXX test/cpp_headers/version.o 00:03:41.749 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.749 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.750 CXX test/cpp_headers/vhost.o 00:03:41.750 CXX test/cpp_headers/vmd.o 00:03:41.750 CXX test/cpp_headers/xor.o 00:03:41.750 CXX test/cpp_headers/zipf.o 00:03:42.008 LINK llvm_nvme_fuzz 00:03:42.008 LINK memory_ut 00:03:42.266 LINK cuse 00:03:42.266 LINK spdk_lock 00:03:42.833 LINK iscsi_fuzz 00:03:46.119 LINK esnap 00:03:46.685 00:03:46.685 real 0m47.726s 00:03:46.685 user 8m5.515s 00:03:46.685 sys 2m37.701s 00:03:46.685 13:31:39 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:03:46.685 13:31:39 make -- common/autotest_common.sh@10 -- $ set +x 00:03:46.685 ************************************ 00:03:46.685 END TEST make 00:03:46.685 ************************************ 00:03:46.685 13:31:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:46.685 13:31:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:46.685 13:31:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:46.685 13:31:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:46.685 13:31:39 -- pm/common@44 -- $ pid=3311541 00:03:46.685 13:31:39 -- pm/common@50 -- $ kill -TERM 3311541 00:03:46.685 13:31:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:46.685 13:31:39 -- pm/common@44 -- $ pid=3311542 00:03:46.685 13:31:39 -- pm/common@50 -- $ kill -TERM 3311542 00:03:46.685 13:31:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:46.685 13:31:39 -- pm/common@44 -- $ pid=3311544 00:03:46.685 13:31:39 -- pm/common@50 -- $ kill -TERM 3311544 00:03:46.685 13:31:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:46.685 13:31:39 -- pm/common@44 -- $ pid=3311567 00:03:46.685 13:31:39 -- pm/common@50 -- $ sudo -E kill -TERM 3311567 00:03:46.685 13:31:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:46.685 13:31:39 -- nvmf/common.sh@7 -- # uname -s 00:03:46.685 13:31:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:46.685 13:31:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:46.685 13:31:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:46.685 13:31:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:46.685 13:31:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:46.685 13:31:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:46.685 13:31:39 -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:46.685 13:31:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:46.685 13:31:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:46.685 13:31:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:46.685 13:31:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8089bee2-271d-eb11-906e-0017a4403562 00:03:46.685 13:31:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8089bee2-271d-eb11-906e-0017a4403562 00:03:46.685 13:31:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:46.685 13:31:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:46.685 13:31:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:46.685 13:31:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:46.685 13:31:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:46.685 13:31:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:46.685 13:31:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:46.685 13:31:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:46.685 13:31:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.685 13:31:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.685 13:31:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.685 13:31:39 -- paths/export.sh@5 -- # export PATH 00:03:46.685 13:31:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:46.685 13:31:39 -- nvmf/common.sh@47 -- # : 0 00:03:46.685 13:31:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:46.685 13:31:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:46.685 13:31:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:46.685 13:31:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:46.685 13:31:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:46.685 13:31:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:46.685 13:31:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:46.685 13:31:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:46.685 13:31:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:46.685 13:31:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:46.685 13:31:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:46.685 13:31:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:46.685 13:31:39 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:46.685 13:31:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:46.685 13:31:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:46.685 13:31:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:46.685 13:31:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:46.685 13:31:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:46.685 13:31:39 -- spdk/autotest.sh@48 -- # udevadm_pid=3372941 00:03:46.685 13:31:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:46.685 13:31:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:46.685 13:31:39 -- pm/common@17 -- # local monitor 00:03:46.685 13:31:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@21 -- # date +%s 00:03:46.685 13:31:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:46.685 13:31:39 -- pm/common@21 -- # date +%s 00:03:46.685 13:31:39 -- pm/common@25 -- # sleep 1 00:03:46.685 13:31:39 -- pm/common@21 -- # date +%s 00:03:46.685 13:31:39 -- pm/common@21 -- # date +%s 00:03:46.685 13:31:39 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105499 00:03:46.685 13:31:39 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105499 00:03:46.685 13:31:39 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105499 00:03:46.685 13:31:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1718105499 00:03:46.943 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105499_collect-vmstat.pm.log 00:03:46.943 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105499_collect-cpu-load.pm.log 00:03:46.943 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105499_collect-cpu-temp.pm.log 00:03:46.943 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1718105499_collect-bmc-pm.bmc.pm.log 00:03:47.880 13:31:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:47.880 13:31:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:47.880 13:31:40 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:47.880 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:47.880 13:31:40 -- spdk/autotest.sh@59 -- # create_test_list 00:03:47.880 13:31:40 -- common/autotest_common.sh@747 -- # xtrace_disable 00:03:47.880 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:47.880 13:31:40 -- 
spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:47.880 13:31:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:47.880 13:31:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:47.880 13:31:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:47.880 13:31:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:47.880 13:31:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:47.880 13:31:40 -- common/autotest_common.sh@1454 -- # uname 00:03:47.880 13:31:40 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:03:47.880 13:31:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:47.880 13:31:40 -- common/autotest_common.sh@1474 -- # uname 00:03:47.880 13:31:40 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:03:47.880 13:31:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:47.880 13:31:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:47.880 13:31:40 -- spdk/autotest.sh@72 -- # hash lcov 00:03:47.880 13:31:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:47.880 13:31:40 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:47.880 13:31:40 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:47.880 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:03:47.880 13:31:40 -- spdk/autotest.sh@91 -- # rm -f 00:03:47.880 13:31:40 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.169 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:51.169 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:51.169 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:51.169 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:51.169 13:31:44 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:51.169 13:31:44 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:51.169 13:31:44 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:51.169 13:31:44 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:51.169 13:31:44 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:51.169 13:31:44 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:51.169 13:31:44 -- 
common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:51.169 13:31:44 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.169 13:31:44 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:51.169 13:31:44 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:51.169 13:31:44 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:03:51.169 13:31:44 -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:03:51.169 13:31:44 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:51.169 13:31:44 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:51.169 13:31:44 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:51.169 13:31:44 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:03:51.169 13:31:44 -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:03:51.169 13:31:44 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:51.169 13:31:44 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:51.169 13:31:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:51.169 13:31:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.169 13:31:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.169 13:31:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:51.169 13:31:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:51.169 13:31:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:51.429 No valid GPT data, bailing 00:03:51.429 13:31:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:51.429 13:31:44 -- scripts/common.sh@391 -- # pt= 00:03:51.429 13:31:44 -- scripts/common.sh@392 -- # return 1 00:03:51.429 13:31:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:51.429 1+0 records in 00:03:51.429 1+0 records out 00:03:51.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582171 s, 180 MB/s 00:03:51.429 13:31:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.429 13:31:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.429 13:31:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:51.429 13:31:44 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:51.429 13:31:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:51.429 No valid GPT data, bailing 00:03:51.429 13:31:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:51.429 13:31:44 -- scripts/common.sh@391 -- # pt= 00:03:51.429 13:31:44 -- scripts/common.sh@392 -- # return 1 00:03:51.429 13:31:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:51.429 1+0 records in 00:03:51.429 1+0 records out 00:03:51.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0052537 s, 200 MB/s 00:03:51.429 13:31:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.429 13:31:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.429 13:31:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:51.429 13:31:44 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:51.429 13:31:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:51.429 No valid GPT data, bailing 00:03:51.429 13:31:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme2n1 00:03:51.429 13:31:44 -- scripts/common.sh@391 -- # pt= 00:03:51.429 13:31:44 -- scripts/common.sh@392 -- # return 1 00:03:51.429 13:31:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:51.429 1+0 records in 00:03:51.429 1+0 records out 00:03:51.429 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420617 s, 249 MB/s 00:03:51.429 13:31:44 -- spdk/autotest.sh@118 -- # sync 00:03:51.429 13:31:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.429 13:31:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.429 13:31:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.700 13:31:48 -- spdk/autotest.sh@124 -- # uname -s 00:03:56.701 13:31:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:56.701 13:31:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:56.701 13:31:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:56.701 13:31:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:56.701 13:31:48 -- common/autotest_common.sh@10 -- # set +x 00:03:56.701 ************************************ 00:03:56.701 START TEST setup.sh 00:03:56.701 ************************************ 00:03:56.701 13:31:49 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:56.701 * Looking for test storage... 00:03:56.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:56.701 13:31:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:56.701 13:31:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:56.701 13:31:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:56.701 13:31:49 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:56.701 13:31:49 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:56.701 13:31:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.701 ************************************ 00:03:56.701 START TEST acl 00:03:56.701 ************************************ 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:56.701 * Looking for test storage... 
00:03:56.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:56.701 13:31:49 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:56.701 13:31:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:56.701 13:31:49 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:56.701 13:31:49 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:56.701 13:31:49 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:56.701 13:31:49 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:56.701 13:31:49 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:56.701 13:31:49 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.701 13:31:49 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.988 13:31:52 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:59.988 13:31:52 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:59.988 13:31:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.988 13:31:52 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:59.988 13:31:52 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.988 13:31:52 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:02.522 Hugepages 00:04:02.522 node hugesize free / total 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:02.522 13:31:55 
setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 00:04:02.522 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.522 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:02.781 13:31:55 setup.sh.acl -- 
setup/acl.sh@22 -- # devs+=("$dev") 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.781 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:02.782 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == 
*\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:04:03.041 13:31:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:03.041 13:31:55 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:03.041 13:31:55 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:03.041 13:31:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:03.041 ************************************ 00:04:03.041 START TEST denied 00:04:03.041 ************************************ 00:04:03.041 13:31:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:04:03.041 13:31:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:04:03.041 13:31:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:03.041 13:31:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:04:03.041 13:31:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.041 13:31:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:07.230 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.230 13:31:59 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.490 00:04:11.490 real 0m8.294s 00:04:11.490 user 0m2.322s 00:04:11.490 sys 0m4.186s 00:04:11.490 13:32:04 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:11.490 13:32:04 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:11.490 ************************************ 00:04:11.490 END TEST denied 00:04:11.490 ************************************ 00:04:11.490 13:32:04 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:11.490 13:32:04 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:11.490 13:32:04 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:11.490 13:32:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:11.490 ************************************ 00:04:11.490 START TEST allowed 00:04:11.490 ************************************ 00:04:11.490 13:32:04 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:04:11.490 13:32:04 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 
00:04:11.490 13:32:04 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:11.490 13:32:04 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:04:11.490 13:32:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.490 13:32:04 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:15.681 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:15.681 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:5f:00.0 0000:d8:00.0 00:04:15.681 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:15.681 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.682 13:32:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:18.972 00:04:18.972 real 0m7.261s 00:04:18.972 user 0m2.362s 00:04:18.972 sys 0m4.115s 00:04:18.972 13:32:11 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:18.972 13:32:11 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:18.972 ************************************ 00:04:18.972 END TEST allowed 00:04:18.972 ************************************ 00:04:18.972 00:04:18.972 real 0m22.226s 00:04:18.972 user 0m7.211s 00:04:18.972 sys 0m12.662s 00:04:18.972 13:32:11 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:18.972 13:32:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:18.972 ************************************ 00:04:18.972 END TEST acl 00:04:18.972 ************************************ 00:04:18.972 13:32:11 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:18.972 13:32:11 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:18.972 13:32:11 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:18.972 13:32:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:18.972 ************************************ 00:04:18.972 START TEST hugepages 00:04:18.972 ************************************ 00:04:18.972 13:32:11 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:04:18.972 * Looking for test storage... 
00:04:18.972 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 68703776 kB' 'MemAvailable: 72538428 kB' 'Buffers: 8892 kB' 'Cached: 15686852 kB' 'SwapCached: 0 kB' 'Active: 12680512 kB' 'Inactive: 3653308 kB' 'Active(anon): 12138336 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641472 kB' 'Mapped: 182668 kB' 'Shmem: 11500260 kB' 'KReclaimable: 475360 kB' 'Slab: 989352 kB' 'SReclaimable: 475360 kB' 'SUnreclaim: 513992 kB' 'KernelStack: 19232 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52437912 kB' 'Committed_AS: 13629892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211792 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.972 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 
13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.973 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:18.974 13:32:11 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:18.974 13:32:11 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:18.974 13:32:11 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:18.974 13:32:11 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:18.974 13:32:11 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.974 ************************************ 00:04:18.974 START TEST default_setup 00:04:18.974 ************************************ 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.974 13:32:11 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:22.264 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:22.264 
0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.833 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.833 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.833 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.833 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 70967336 kB' 'MemAvailable: 74800580 kB' 'Buffers: 8892 kB' 'Cached: 15686964 kB' 'SwapCached: 0 kB' 'Active: 12701516 kB' 'Inactive: 3653308 kB' 'Active(anon): 12159340 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661996 kB' 'Mapped: 182800 kB' 'Shmem: 11500372 kB' 'KReclaimable: 473952 kB' 'Slab: 983068 kB' 'SReclaimable: 473952 kB' 'SUnreclaim: 509116 kB' 'KernelStack: 19568 kB' 'PageTables: 9724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13662540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212160 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:22.834 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.834 
13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[repetitive xtrace elided: setup/common.sh@32 compares each /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and issues 'continue' for every non-matching key]
00:04:23.099 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.099 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:23.099 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:23.099 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
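For anyone decoding the wall of "-- # continue" entries above: get_meminfo in setup/common.sh walks /proc/meminfo (or a per-node /sys/devices/system/node/node<N>/meminfo file, whose "Node N " prefixes it strips) one line at a time and echoes the value of the single field it was asked for, so every non-matching key shows up in the xtrace as one skipped comparison. The snippet below is a minimal stand-alone sketch of that idea, not the actual setup/common.sh source; the simplified argument handling is an assumption and the per-node branch is omitted.

    get_meminfo() {
        # Print the numeric value of one /proc/meminfo field (without the "kB" suffix).
        local get=$1 mem_f=/proc/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # Each iteration corresponds to one "[[ <key> == ... ]] / continue" pair in the trace.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

    get_meminfo AnonHugePages     # prints 0 here, matching the anon=0 result above
    get_meminfo HugePages_Total   # prints 1024 here, per the dump above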
00:04:23.099 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace elided: the helper sets up mem_f=/proc/meminfo exactly as above and dumps the file a second time; the hugepage counters are unchanged (HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB) and the remaining fields drift only slightly, e.g. MemFree: 70967832 kB, AnonPages: 661808 kB, KernelStack: 19536 kB, PageTables: 9356 kB; the per-key scan then runs again until the HugePages_Surp entry matches]
00:04:23.102 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:23.102 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:23.102 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
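To restate what these readings feed into: default_setup asked get_test_nr_hugepages for 2097152 kB of default-sized pages on node 0, which the trace resolved to nr_hugepages=1024 (2097152 kB / 2048 kB per page) and assigned entirely to node 0 (the nodes_test[_no_nodes]=1024 step with user_nodes=('0')). verify_nr_hugepages then gathers AnonHugePages, HugePages_Surp and HugePages_Rsvd, and a few entries further down checks that the pool adds up. The sketch below restates that arithmetic; the meminfo helper name is hypothetical, and deriving the page size from the Hugepagesize field is an assumption, not necessarily how setup/hugepages.sh computes it.

    # Hypothetical helper: print one /proc/meminfo value by key.
    meminfo() { awk -v k="$1" '$1 == k":" { print $2; exit }' /proc/meminfo; }

    size_kb=2097152                         # requested pool, from "get_test_nr_hugepages 2097152 0"
    page_kb=$(meminfo Hugepagesize)         # 2048 on this box
    nr_hugepages=$(( size_kb / page_kb ))   # 2097152 / 2048 = 1024 pages

    surp=$(meminfo HugePages_Surp)          # 0 in the dumps above
    resv=$(meminfo HugePages_Rsvd)          # 0 in the dumps above
    total=$(meminfo HugePages_Total)        # 1024 in the dumps above

    # The consistency checks visible later in the trace (setup/hugepages.sh@107 and @109)
    # amount to (( 1024 == 1024 + 0 + 0 )) and (( 1024 == 1024 )) on this run.
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) \
        && echo "hugepage pool consistent: ${total} pages of ${page_kb} kB"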
00:04:23.102 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace elided: the helper dumps /proc/meminfo a third time (hugepage counters still HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0; MemFree: 70970024 kB, AnonPages: 662008 kB, KernelStack: 19504 kB, PageTables: 9632 kB) and the per-key scan repeats, continuing through the HugePages_Total entry] 00:04:23.105 13:32:15
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.105 nr_hugepages=1024 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.105 resv_hugepages=0 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.105 surplus_hugepages=0 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.105 anon_hugepages=0 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 70970204 kB' 'MemAvailable: 74803448 kB' 'Buffers: 8892 kB' 'Cached: 15687008 kB' 'SwapCached: 0 kB' 'Active: 12700876 kB' 'Inactive: 3653308 kB' 'Active(anon): 12158700 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661536 kB' 'Mapped: 182792 kB' 'Shmem: 11500416 kB' 'KReclaimable: 473952 kB' 'Slab: 983248 kB' 'SReclaimable: 473952 kB' 'SUnreclaim: 509296 kB' 'KernelStack: 19376 kB' 
'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13661124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212064 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.105 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
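For reference, the get_meminfo helper being traced above (setup/common.sh) reads /proc/meminfo -- or /sys/devices/system/node/nodeN/meminfo when a node id is passed -- strips the "Node N " prefix carried by the per-node files, and prints the value of the one requested key. A minimal standalone sketch of that logic, using the same names as the trace but simplified rather than copied from the script:

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        # Usage: get_meminfo <key> [node]
        #   e.g. get_meminfo HugePages_Total      -> size of the global pool
        #        get_meminfo HugePages_Surp 0     -> surplus pages on node0
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Prefer the per-node meminfo when a node id is given and the file exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip it so the
        # key names look like the global /proc/meminfo ones.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Everything that follows in the trace is that per-key loop comparing each field name against the requested key (HugePages_Total here) and hitting continue until it matches.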
00:04:23.106 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # (xtrace loop repeats for Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- none of which match HugePages_Total)
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 32857896 kB' 'MemUsed: 15260628 kB' 'SwapCached: 0 kB' 'Active: 8469212 kB' 'Inactive: 3471696 kB' 'Active(anon): 8090320 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11407888 kB' 'Mapped: 162952 kB' 'AnonPages: 536208 kB' 'Shmem: 7557300 kB' 'KernelStack: 11528 kB' 'PageTables: 6624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225296 kB' 'Slab: 506480 kB' 'SReclaimable: 225296 kB' 'SUnreclaim: 281184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.108 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue (xtrace loop repeats for MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free -- none of which match HugePages_Surp)
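The node loop traced here (setup/hugepages.sh@115-@117) adds the reserved count into each node's expected total and then asks get_meminfo for that node's HugePages_Surp, which is what produces the node0=1024 result just below. The same cross-check can be expressed outside the harness by summing the per-node counters and comparing them with the global pool; a rough sketch, assuming the standard /sys/devices/system/node layout (variable names are illustrative, not the harness code):

    # Sum HugePages_Total across NUMA nodes and compare with the global pool.
    total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        n=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
        echo "${node##*/}: HugePages_Total=$n"
        total=$((total + n))
    done
    global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == global )) && echo "per-node totals match the global pool ($global)"

On this run that would line up with the 1024 pages on node0 and 0 on node1 that get_nodes recorded above.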
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.109 node0=1024 expecting 1024 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.109 00:04:23.109 real 0m4.269s 00:04:23.109 user 0m1.353s 00:04:23.109 sys 0m2.023s 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:23.109 13:32:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:23.109 ************************************ 00:04:23.109 END TEST default_setup 00:04:23.109 ************************************ 00:04:23.109 13:32:15 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:23.109 13:32:15 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:23.109 13:32:15 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:23.109 13:32:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.109 ************************************ 00:04:23.109 START TEST per_node_1G_alloc 00:04:23.109 ************************************ 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:23.109 13:32:15 
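The get_test_nr_hugepages 1048576 0 1 call traced above turns a 1 GiB-per-node request into a count of default-sized pages: 1048576 kB divided by the 2048 kB page size gives the nr_hugepages=512 seen in the trace, assigned to both node 0 and node 1 and later handed to scripts/setup.sh as NRHUGE=512 HUGENODE=0,1. The arithmetic spelled out as a sketch (Hugepagesize is read from /proc/meminfo here rather than taken from the harness's default_hugepages variable):

    size_kb=1048576                                            # 1 GiB requested per node
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this system
    nr_hugepages=$(( size_kb / hp_kb ))                        # 512 pages per node
    node_ids=(0 1)
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages                           # expect 512 pages on each node
    done
    echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"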
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.109 13:32:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:25.647 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.647 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.647 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.647 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 
-- # local sorted_t 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.647 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71040560 kB' 'MemAvailable: 74873804 kB' 'Buffers: 8892 kB' 'Cached: 15687092 kB' 'SwapCached: 0 kB' 'Active: 12701884 kB' 'Inactive: 3653308 kB' 'Active(anon): 12159708 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662368 kB' 'Mapped: 183224 kB' 'Shmem: 11500500 kB' 'KReclaimable: 473952 kB' 'Slab: 983568 kB' 'SReclaimable: 473952 kB' 'SUnreclaim: 509616 kB' 'KernelStack: 19456 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13662928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212176 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.648 
13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.648 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # (xtrace loop repeats for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable and the remaining keys, none of which match AnonHugePages; the scan continues past this point)
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.649 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.913 13:32:18 
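The long run of IFS=': ' / read / continue entries above is setup/common.sh's get_meminfo walking /proc/meminfo one row at a time: each row's key is compared against the requested name (xtrace prints the literal key with every character backslash-escaped, hence \A\n\o\n\H\u\g\e\P\a\g\e\s), mismatches hit the continue at common.sh@32, and the first match echoes its value and returns. Here AnonHugePages resolves to 0, so hugepages.sh@97 records anon=0 before starting the same lookup for HugePages_Surp. A minimal stand-alone sketch of that lookup (illustrative only; the function name below is hypothetical, not the setup/common.sh source):

    get_meminfo_value() {
        # Scan /proc/meminfo for one key and print its value, mirroring the trace above.
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # every mismatch shows up as a "continue" entry
            echo "$val"                        # value in kB, or a bare count for HugePages_* rows
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    anon=$(get_meminfo_value AnonHugePages)    # 0 on this host, matching anon=0 in the log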
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71041076 kB' 'MemAvailable: 74874320 kB' 'Buffers: 8892 kB' 'Cached: 15687092 kB' 'SwapCached: 0 kB' 'Active: 12703096 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160920 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663604 kB' 'Mapped: 183224 kB' 'Shmem: 11500500 kB' 'KReclaimable: 473952 kB' 'Slab: 983548 kB' 'SReclaimable: 473952 kB' 'SUnreclaim: 509596 kB' 'KernelStack: 19664 kB' 'PageTables: 9660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13661468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212208 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 
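The common.sh@17-@29 lines at the start of this lookup show how the same parser serves both the system-wide and the per-node view: node= is empty here, so the existence test on /sys/devices/system/node/node/meminfo fails and mem_f stays /proc/meminfo; when a node number is supplied, the per-node file is read instead, and its rows carry a "Node <n> " prefix that the extglob substitution mem=("${mem[@]#Node +([0-9]) }") strips before the key/value scan runs. A hedged simplification of that selection logic (not the verbatim source):

    shopt -s extglob                           # needed for the +([0-9]) pattern below
    node=""                                    # empty in this trace, so /proc/meminfo is used
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # no-op for /proc/meminfo; strips "Node 0 " etc. otherwise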
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.913 13:32:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.913 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.914 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # 
local var val 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71042532 kB' 'MemAvailable: 74875776 kB' 'Buffers: 8892 kB' 'Cached: 15687092 kB' 'SwapCached: 0 kB' 'Active: 12702772 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160596 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663280 kB' 'Mapped: 183164 kB' 'Shmem: 11500500 kB' 'KReclaimable: 473952 kB' 'Slab: 983548 kB' 'SReclaimable: 473952 kB' 'SUnreclaim: 509596 kB' 'KernelStack: 19568 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13661492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212112 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 
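The snapshot printed at common.sh@16 also carries the numbers the rest of the test relies on: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB, i.e. a fully free 2 MiB-page pool of 1024 * 2048 kB = 2097152 kB (2 GiB), which is exactly the Hugetlb figure in the same snapshot. A quick way to recompute that from /proc/meminfo (an illustrative check, not part of the test itself):

    awk '/^HugePages_Total:/ {t=$2}
         /^Hugepagesize:/    {sz=$2}
         /^Hugetlb:/         {hl=$2}
         END {printf "pool = %d pages * %d kB = %d kB (Hugetlb reports %d kB)\n", t, sz, t*sz, hl}' /proc/meminfo

On hosts with more than one hugepage size, Hugetlb sums all of them, so the two figures can legitimately differ; here only 2 MiB pages are configured and they match.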
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.915 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _
00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.916 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same common.sh@31/@32 read/compare/continue entries repeat for the remaining /proc/meminfo fields, SecPageTables through HugePages_Free, none of which match HugePages_Rsvd ...]
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:25.917 nr_hugepages=1024
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:25.917 resv_hugepages=0
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:25.917 surplus_hugepages=0
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:25.917 anon_hugepages=0
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
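The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo field by field until it reaches HugePages_Rsvd, then echoing its value (0 here), which hugepages.sh stores as resv before printing the nr_hugepages/resv/surplus/anon summary. A minimal sketch of that lookup pattern, assuming a helper named get_meminfo_sketch and only the field-scan behaviour visible in the trace (not the exact SPDK test/setup/common.sh code):

    #!/usr/bin/env bash
    # Sketch only: pick the system-wide or per-node meminfo file, strip the
    # "Node N " prefix that per-node files carry, and scan field by field.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}    # field name, optional NUMA node number
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            # Stop at the requested field and print its numeric value.
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # Matches the value seen in the trace: resv=$(get_meminfo_sketch HugePages_Rsvd)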
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71043560 kB' 'MemAvailable: 74876804 kB' 'Buffers: 8892 kB' 'Cached: 15687136 kB' 'SwapCached: 0 kB' 'Active: 12701720 kB' 'Inactive: 3653308 kB' 'Active(anon): 12159544 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662228 kB' 'Mapped: 182808 kB' 'Shmem: 11500544 kB' 'KReclaimable: 473952 kB' 'Slab: 983584 kB' 'SReclaimable: 473952 kB' 'SUnreclaim: 509632 kB' 'KernelStack: 19584 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13662624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB'
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.917 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the same common.sh@31/@32 read/compare/continue entries repeat for every field from MemFree through Unaccepted, none of which match HugePages_Total ...]
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:25.919 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
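At this point the system-wide HugePages_Total (1024) matches nr_hugepages, and get_nodes enumerates /sys/devices/system/node/node* to record how many hugepages each NUMA node exposes (512 and 512 here, giving no_nodes=2). A sketch of that discovery step, assuming the per-node count can be read back from each node's meminfo; the awk lookup is an illustrative stand-in, not the script's own code:

    #!/usr/bin/env bash
    # Sketch only: enumerate NUMA nodes and record a per-node hugepage count.
    shopt -s extglob nullglob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Per-node meminfo lines look like "Node 0 HugePages_Total:   512".
        nodes_sys[${node##*node}]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
    echo "nodes: ${!nodes_sys[*]}, per-node HugePages_Total: ${nodes_sys[*]}"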
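The remainder of the trace below then repeats the same meminfo lookup once per node: for node 0 and node 1 it fetches HugePages_Surp and folds it, together with the reserved count, into the expected per-node totals (nodes_test). A compact sketch of that accounting loop, with nodes_test seeded from the 512/512 values in the trace and a hypothetical awk lookup standing in for get_meminfo:

    #!/usr/bin/env bash
    # Sketch only: fold reserved and per-node surplus pages into the expected totals.
    declare -a nodes_test=([0]=512 [1]=512)   # seed values taken from the trace
    resv=0                                    # HugePages_Rsvd from the system-wide lookup
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
            "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += ${surp:-0} ))
    done
    echo "expected hugepages per node: ${nodes_test[*]}"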
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 33917176 kB' 'MemUsed: 14201348 kB' 'SwapCached: 0 kB' 'Active: 8468816 kB' 'Inactive: 3471696 kB' 'Active(anon): 8089924 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11407980 kB' 'Mapped: 162968 kB' 'AnonPages: 535704 kB' 'Shmem: 7557392 kB' 'KernelStack: 11688 kB' 'PageTables: 6968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225296 kB' 'Slab: 507204 kB' 'SReclaimable: 225296 kB' 'SUnreclaim: 281908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.920 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the node0 scan repeats the common.sh@31/@32 read/compare/continue entries for MemFree through HugePages_Free, none of which match HugePages_Surp ...]
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174400 kB' 'MemFree: 37122940 kB' 'MemUsed: 7051460 kB' 'SwapCached: 0 kB' 'Active: 4232860 kB' 'Inactive: 181612 kB' 'Active(anon): 4069576 kB' 'Inactive(anon): 0 kB' 'Active(file): 163284 kB' 'Inactive(file): 181612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4288056 kB' 'Mapped: 19840 kB' 'AnonPages: 126488 kB' 'Shmem: 3943160 kB' 'KernelStack: 7816 kB' 'PageTables: 2416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248656 kB' 'Slab: 476316 kB' 'SReclaimable: 248656 kB' 'SUnreclaim: 227660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.921 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... the node1 scan repeats the same entries for MemFree through FilePmdMapped ...]
00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.923 13:32:18
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.923 node0=512 expecting 512 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:25.923 node1=512 expecting 512 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.923 00:04:25.923 real 0m2.757s 00:04:25.923 user 0m1.039s 00:04:25.923 sys 0m1.682s 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:25.923 13:32:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.923 ************************************ 00:04:25.923 END TEST per_node_1G_alloc 00:04:25.923 ************************************ 00:04:25.923 13:32:18 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:25.923 13:32:18 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:25.923 13:32:18 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:25.923 13:32:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.923 ************************************ 00:04:25.923 START TEST even_2G_alloc 00:04:25.923 ************************************ 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:04:25.923 13:32:18 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.923 13:32:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:29.217 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.217 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.217 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:00:04.1 (8086 2021): Already using the vfio-pci 
driver 00:04:29.217 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.217 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.218 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71101772 kB' 'MemAvailable: 74935008 kB' 'Buffers: 8892 kB' 'Cached: 15687252 kB' 'SwapCached: 0 kB' 'Active: 12698084 kB' 'Inactive: 3653308 kB' 'Active(anon): 12155908 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658568 kB' 'Mapped: 181720 kB' 'Shmem: 11500660 kB' 'KReclaimable: 473944 kB' 'Slab: 982980 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509036 kB' 'KernelStack: 19232 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13638800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211936 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.218 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 
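The loop that just finished is the generic lookup this suite repeats for every counter it needs: slurp the meminfo file, strip any per-node prefix, then scan key/value pairs until the requested field (AnonHugePages here) matches and echo its value. A minimal stand-alone sketch of that pattern, using a hypothetical helper name meminfo_value since only fragments of the real setup/common.sh routine are visible in this trace:

#!/usr/bin/env bash
# Hypothetical helper sketching the lookup traced above; not the real get_meminfo.
meminfo_value() {
    local key=$1 node=${2-}        # empty node -> system-wide /proc/meminfo
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <id> "; drop that prefix
    # (the traced script does the same with an extglob parameter expansion).
    local -a lines
    mapfile -t lines < <(sed 's/^Node [0-9]* //' "$file")
    local line var val _
    for line in "${lines[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

meminfo_value AnonHugePages      # system-wide; the run above settles on anon=0
meminfo_value HugePages_Free 0   # per-node query, assuming a node0 directory exists

Most fields come back in kB while the HugePages_* family is a raw page count, which is why the value can be used directly as anon=0 a few entries back.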
00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71103900 kB' 'MemAvailable: 74937136 kB' 'Buffers: 8892 kB' 'Cached: 15687256 kB' 'SwapCached: 0 kB' 'Active: 12697844 kB' 'Inactive: 3653308 kB' 'Active(anon): 12155668 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658376 kB' 'Mapped: 181712 kB' 'Shmem: 11500664 kB' 'KReclaimable: 473944 kB' 'Slab: 982964 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509020 kB' 'KernelStack: 19232 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13638816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211920 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.219 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.220 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.220 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71104884 kB' 'MemAvailable: 74938120 kB' 'Buffers: 8892 kB' 'Cached: 15687272 kB' 'SwapCached: 0 kB' 'Active: 12697688 kB' 'Inactive: 3653308 kB' 'Active(anon): 12155512 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658168 kB' 'Mapped: 181712 kB' 'Shmem: 11500680 kB' 'KReclaimable: 473944 kB' 'Slab: 982948 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509004 kB' 'KernelStack: 19216 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13638840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211920 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.221 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 
13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.222 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
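(Annotation: the backslash-heavy patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d above are not corruption — when the right-hand side of == inside [[ ]] is a quoted expansion, bash's xtrace prints it with every character escaped to show it is compared literally rather than as a glob. A tiny reproduction, with an illustrative variable name:)

set -x
get=HugePages_Rsvd
[[ MemTotal == "$get" ]] || echo "no match"
# xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]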
00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.223 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.224 nr_hugepages=1024 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.224 resv_hugepages=0 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.224 surplus_hugepages=0 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.224 anon_hugepages=0 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71104884 kB' 'MemAvailable: 74938120 kB' 'Buffers: 8892 kB' 'Cached: 15687292 kB' 'SwapCached: 0 kB' 'Active: 12697996 kB' 'Inactive: 3653308 
kB' 'Active(anon): 12155820 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658456 kB' 'Mapped: 181712 kB' 'Shmem: 11500700 kB' 'KReclaimable: 473944 kB' 'Slab: 982960 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509016 kB' 'KernelStack: 19216 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13638860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211920 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.224 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.225 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.226 13:32:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 33959752 kB' 'MemUsed: 14158772 kB' 'SwapCached: 0 kB' 'Active: 8466180 kB' 'Inactive: 3471696 kB' 'Active(anon): 8087288 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11408132 kB' 'Mapped: 162384 kB' 'AnonPages: 533048 kB' 'Shmem: 7557544 kB' 'KernelStack: 11400 kB' 'PageTables: 6084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225288 kB' 'Slab: 506704 kB' 'SReclaimable: 225288 kB' 'SUnreclaim: 281416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.226 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
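(Annotation: the per-node snapshot above comes from /sys/devices/system/node/node0/meminfo, and the node1 snapshot a little further down reports the same HugePages_Total: 512; with nr_hugepages=1024 allocated evenly across the two nodes, the node totals plus any surplus and reserved pages must add back up to the global count. A quick consistency check over values copied from this trace — the variable names are illustrative:)

# Values copied from the global and per-node snapshots in this trace.
nr_hugepages=1024
hugepagesize_kb=2048
node_totals=(512 512)   # HugePages_Total on node0 and node1
surp=0 resv=0

(( node_totals[0] + node_totals[1] == nr_hugepages )) && echo "even split OK"
(( nr_hugepages == nr_hugepages + surp + resv ))      && echo "surplus/reserved accounted"
(( nr_hugepages * hugepagesize_kb == 2097152 ))       && echo "matches Hugetlb: 2097152 kB"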
00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.227 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.228 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174400 kB' 'MemFree: 37144880 kB' 'MemUsed: 7029520 kB' 'SwapCached: 0 kB' 'Active: 4231360 kB' 'Inactive: 181612 kB' 'Active(anon): 4068076 kB' 'Inactive(anon): 0 kB' 'Active(file): 163284 kB' 'Inactive(file): 181612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4288092 kB' 'Mapped: 19328 kB' 'AnonPages: 124904 kB' 'Shmem: 3943196 kB' 'KernelStack: 7816 kB' 'PageTables: 2428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248656 kB' 'Slab: 476256 kB' 'SReclaimable: 248656 kB' 'SUnreclaim: 227600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Surp: 0' [... xtrace elided: get_meminfo walks the node1 meminfo fields (MemTotal through Unaccepted), none of which match HugePages_Surp ...] 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.229
13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.229 node0=512 expecting 512 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:29.229 node1=512 expecting 512 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:29.229 00:04:29.229 real 0m2.925s 00:04:29.229 user 0m1.185s 00:04:29.229 sys 0m1.806s 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.229 13:32:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.229 ************************************ 00:04:29.229 END TEST even_2G_alloc 00:04:29.229 ************************************ 00:04:29.229 13:32:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:29.229 13:32:21 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.229 13:32:21 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.229 13:32:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.229 ************************************ 00:04:29.229 START TEST odd_alloc 00:04:29.229 ************************************ 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.229 13:32:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:31.769 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.769 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.769 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.4 (8086 2021): Already using the 
vfio-pci driver 00:04:31.769 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:31.769 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.769 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71123276 kB' 'MemAvailable: 74956512 kB' 'Buffers: 8892 kB' 'Cached: 15687408 kB' 'SwapCached: 0 kB' 'Active: 12699324 kB' 'Inactive: 3653308 kB' 'Active(anon): 12157148 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659724 kB' 'Mapped: 181732 kB' 'Shmem: 11500816 kB' 'KReclaimable: 473944 kB' 'Slab: 982696 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 508752 kB' 'KernelStack: 19216 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485464 kB' 'Committed_AS: 13639512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211920 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 
1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' [... xtrace elided: get_meminfo walks the /proc/meminfo fields (MemTotal through VmallocTotal), none of which match AnonHugePages ...] 00:04:31.770
13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.770 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71123624 kB' 'MemAvailable: 74956860 kB' 'Buffers: 8892 kB' 'Cached: 15687412 kB' 'SwapCached: 0 
kB' 'Active: 12698288 kB' 'Inactive: 3653308 kB' 'Active(anon): 12156112 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658632 kB' 'Mapped: 181724 kB' 'Shmem: 11500820 kB' 'KReclaimable: 473944 kB' 'Slab: 982660 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 508716 kB' 'KernelStack: 19232 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485464 kB' 'Committed_AS: 13639528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.771 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.772 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
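The HugePages_Surp value being read here is part of verifying the odd split that odd_alloc requested above: HUGEMEM=2049 works out to 1025 two-megabyte hugepages spread over the two NUMA nodes, handed out from the highest node down so the leftover page lands on node 0 (node1=512, node0=513). A stand-alone sketch of that arithmetic, with hypothetical variable names (the real loop is get_test_nr_hugepages_per_node in setup/hugepages.sh), is:
# 1025 pages over 2 nodes -> node1 gets the floor share, node0 absorbs the remainder
remaining=1025 no_nodes=2
declare -a nodes_test
while (( no_nodes > 0 )); do
    share=$(( remaining / no_nodes ))        # floor share for the highest remaining node
    nodes_test[no_nodes - 1]=$share
    remaining=$(( remaining - share ))       # 1025 -> 513 once node1 has taken 512
    no_nodes=$(( no_nodes - 1 ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # -> node0=513 node1=512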
[... xtrace elided: get_meminfo walks the remaining /proc/meminfo fields (Slab through HugePages_Free), none of which match HugePages_Surp ...] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71122116 kB' 'MemAvailable: 74955352 kB' 'Buffers: 8892 kB' 'Cached: 15687428 kB' 'SwapCached: 0 kB' 'Active: 12698444 kB' 'Inactive: 3653308 kB' 'Active(anon): 12156268 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658820 kB' 'Mapped: 181724 kB' 'Shmem: 11500836 kB' 'KReclaimable: 473944 kB' 'Slab: 982660 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 508716 kB' 'KernelStack: 19232 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485464 kB' 'Committed_AS: 13639548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.773 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.774 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.775 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:32.037 nr_hugepages=1025 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.037 resv_hugepages=0 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.037 surplus_hugepages=0 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.037 anon_hugepages=0 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71121360 kB' 'MemAvailable: 74954596 kB' 'Buffers: 8892 kB' 'Cached: 15687448 kB' 'SwapCached: 0 kB' 'Active: 12698476 kB' 'Inactive: 3653308 kB' 'Active(anon): 12156300 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658824 kB' 'Mapped: 181724 kB' 'Shmem: 11500856 kB' 'KReclaimable: 473944 kB' 'Slab: 982660 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 508716 kB' 'KernelStack: 19232 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485464 kB' 'Committed_AS: 13639572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211888 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.037 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.038 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.039 13:32:24 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 33974160 kB' 'MemUsed: 14144364 kB' 'SwapCached: 0 kB' 'Active: 8467120 kB' 'Inactive: 3471696 kB' 'Active(anon): 8088228 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11408232 kB' 'Mapped: 162396 kB' 'AnonPages: 533876 kB' 'Shmem: 7557644 kB' 'KernelStack: 11432 kB' 'PageTables: 6236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225288 kB' 'Slab: 506284 kB' 'SReclaimable: 225288 kB' 'SUnreclaim: 280996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.039 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:32.040 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174400 kB' 'MemFree: 37148780 kB' 'MemUsed: 7025620 kB' 'SwapCached: 0 kB' 'Active: 4231412 kB' 'Inactive: 181612 kB' 'Active(anon): 4068128 kB' 'Inactive(anon): 0 kB' 'Active(file): 163284 kB' 'Inactive(file): 181612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4288128 kB' 'Mapped: 19328 kB' 'AnonPages: 124948 kB' 'Shmem: 3943232 kB' 'KernelStack: 7800 kB' 'PageTables: 2340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248656 kB' 'Slab: 476376 kB' 'SReclaimable: 248656 kB' 'SUnreclaim: 227720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 
13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.041 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:32.042 node0=512 expecting 513 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:32.042 node1=513 expecting 512 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:32.042 00:04:32.042 real 0m3.005s 00:04:32.042 user 0m1.221s 00:04:32.042 sys 0m1.858s 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:32.042 13:32:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:32.042 ************************************ 00:04:32.042 END TEST odd_alloc 00:04:32.042 ************************************ 00:04:32.042 13:32:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:32.042 13:32:24 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:32.042 13:32:24 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:32.042 13:32:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.042 ************************************ 00:04:32.042 START TEST custom_alloc 00:04:32.042 ************************************ 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.042 13:32:24 
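[note, not part of the captured output] The odd_alloc trace above repeatedly calls a get_meminfo helper that reads one field (here HugePages_Surp) from /proc/meminfo or from the per-node /sys/devices/system/node/nodeN/meminfo file, stripping the "Node N " prefix and scanning line by line. A minimal Bash sketch of that lookup, reconstructed from the xtrace lines (names follow the trace but this is a simplification, not the exact setup/common.sh source):

# Sketch of the per-node meminfo lookup seen in the trace (simplified).
# get_meminfo FIELD [NODE] - print FIELD's value, preferring the per-node file.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

With this sketch, "get_meminfo HugePages_Surp 0" prints node 0's surplus count, which is the value the trace echoes (0) before accumulating it into nodes_test[node].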
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.042 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:32.043 13:32:24 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.043 13:32:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:35.338 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.338 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.338 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.338 
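[note, not part of the captured output] At this point custom_alloc has built HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and handed it to scripts/setup.sh, so 512 of the 1536 2 MiB pages should land on node 0 and 1024 on node 1. As a rough illustration only (not the actual setup.sh logic), a per-node split like that can be applied through the kernel's standard per-node sysfs interface:

# Illustrative sketch: apply the HUGENODE split via the per-node sysfs knobs.
# Assumes 2 MiB hugepages; this mirrors the kernel interface, not setup.sh itself.
HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
IFS=',' read -ra entries <<< "$HUGENODE"
for entry in "${entries[@]}"; do
    # entry looks like "nodes_hp[0]=512" -> node index 0, 512 pages
    node=${entry#nodes_hp[}; node=${node%%]*}
    pages=${entry#*=}
    echo "$pages" | sudo tee \
        "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done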
0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.338 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.338 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 70076080 kB' 'MemAvailable: 73909316 kB' 'Buffers: 8892 kB' 'Cached: 15687556 kB' 'SwapCached: 0 kB' 'Active: 12699820 kB' 'Inactive: 3653308 kB' 'Active(anon): 12157644 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 659924 kB' 'Mapped: 181744 kB' 'Shmem: 11500964 kB' 'KReclaimable: 473944 kB' 'Slab: 983200 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509256 kB' 'KernelStack: 19328 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962200 kB' 'Committed_AS: 13640048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211936 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.339 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
[per-key scan: SUnreclaim through HardwareCorrupted each fail the [[ $var == AnonHugePages ]] test at setup/common.sh@32 and continue]
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.340 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 70078656 kB' 'MemAvailable: 73911892 kB' 'Buffers: 8892 kB' 'Cached: 15687556 kB' 'SwapCached: 0 kB' 'Active: 12700452 kB' 'Inactive: 3653308 kB' 'Active(anon): 12158276 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660612 kB' 'Mapped: 181744 kB' 'Shmem: 11500964 kB' 'KReclaimable: 473944 kB' 'Slab: 983164 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509220 kB' 'KernelStack: 19312 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962200 kB' 'Committed_AS: 13639692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211872 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB'
[per-key scan: MemTotal through HugePages_Rsvd each fail the [[ $var == HugePages_Surp ]] test at setup/common.sh@32 and continue]
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.342 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.343 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.343 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 70079316 kB' 'MemAvailable: 73912552 kB' 'Buffers: 8892 kB' 'Cached: 15687576 kB' 'SwapCached: 0 kB' 'Active: 12698540 kB' 'Inactive: 3653308 kB' 'Active(anon): 12156364 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658636 kB' 'Mapped: 181736 kB' 'Shmem: 11500984 kB' 'KReclaimable: 473944 kB' 'Slab: 983168 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509224 kB' 'KernelStack: 19232 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962200 kB' 'Committed_AS: 13639848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211856 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB'
[per-key scan: MemTotal through HugePages_Free each fail the [[ $var == HugePages_Rsvd ]] test at setup/common.sh@32 and continue]
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:35.345 nr_hugepages=1536
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.345 resv_hugepages=0
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.345 surplus_hugepages=0
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.345 anon_hugepages=0
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
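For reference, the get_meminfo helper traced above boils down to a small /proc/meminfo lookup loop. The following is a minimal sketch reconstructed from the xtrace, not the verbatim setup/common.sh source; the per-node branch and the fallthrough behaviour are assumptions.

# Sketch of the traced helper (reconstructed from the xtrace above, not copied from setup/common.sh).
# get_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo
# (or from the per-NUMA-node meminfo when NODE is given).
get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo mem
    # Assumed per-node branch: a node-specific query reads that node's meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob                      # needed for the +([0-9]) pattern below
    mem=("${mem[@]#Node +([0-9]) }")      # strip "Node <n> " prefixes from per-node files
    # Scan key by key; every non-matching key is one of the "continue" steps seen in the trace.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                       # e.g. 0 for HugePages_Surp, 1536 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

# The checks above amount to calls such as:
#   surp=$(get_meminfo HugePages_Surp); resv=$(get_meminfo HugePages_Rsvd)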
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.345 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 70079316 kB' 'MemAvailable: 73912552 kB' 'Buffers: 8892 kB' 'Cached: 15687592 kB' 'SwapCached: 0 kB' 'Active: 12698800 kB' 'Inactive: 3653308 kB' 'Active(anon): 12156624 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 658880 kB' 'Mapped: 181736 kB' 'Shmem: 11501000 kB' 'KReclaimable: 473944 kB' 'Slab: 983168 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509224 kB' 'KernelStack: 19216 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962200 kB' 'Committed_AS: 13639872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211856 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB'
[per-key scan in progress: MemTotal through ShmemPmdMapped each fail the [[ $var == HugePages_Total ]] test at setup/common.sh@32 and continue]
00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.346 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.347 13:32:27 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 33967384 kB' 'MemUsed: 14151140 kB' 'SwapCached: 0 kB' 'Active: 8467824 kB' 'Inactive: 3471696 kB' 'Active(anon): 8088932 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11408360 kB' 'Mapped: 162408 kB' 'AnonPages: 534456 kB' 'Shmem: 7557772 kB' 'KernelStack: 11480 kB' 'PageTables: 6404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225288 kB' 'Slab: 506824 kB' 'SReclaimable: 225288 kB' 'SUnreclaim: 281536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.347 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 1 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44174400 kB' 'MemFree: 36111680 kB' 'MemUsed: 8062720 kB' 'SwapCached: 0 kB' 'Active: 4231912 kB' 'Inactive: 181612 kB' 'Active(anon): 4068628 kB' 'Inactive(anon): 0 kB' 'Active(file): 163284 kB' 'Inactive(file): 181612 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4288156 kB' 'Mapped: 19328 kB' 'AnonPages: 125400 kB' 'Shmem: 3943260 kB' 'KernelStack: 7800 kB' 'PageTables: 2360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 248656 kB' 'Slab: 476344 kB' 'SReclaimable: 248656 kB' 'SUnreclaim: 227688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.348 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.349 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:35.350 node0=512 
expecting 512 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:35.350 node1=1024 expecting 1024 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:35.350 00:04:35.350 real 0m3.003s 00:04:35.350 user 0m1.183s 00:04:35.350 sys 0m1.898s 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:35.350 13:32:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.350 ************************************ 00:04:35.350 END TEST custom_alloc 00:04:35.350 ************************************ 00:04:35.350 13:32:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:35.350 13:32:27 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:35.350 13:32:27 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:35.350 13:32:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.350 ************************************ 00:04:35.350 START TEST no_shrink_alloc 00:04:35.350 ************************************ 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:35.350 13:32:27 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.350 13:32:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:37.889 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.889 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.889 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:37.889 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:37.890 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.890 13:32:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71155420 kB' 'MemAvailable: 74988656 kB' 'Buffers: 8892 kB' 'Cached: 15687704 kB' 'SwapCached: 0 kB' 'Active: 12702492 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160316 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662084 kB' 'Mapped: 181776 kB' 'Shmem: 11501112 kB' 'KReclaimable: 473944 kB' 'Slab: 983636 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509692 kB' 'KernelStack: 19376 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13643348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212064 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.890 13:32:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:37.890 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
(the same [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue pair repeats for each remaining /proc/meminfo field)
00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71155580 kB' 'MemAvailable: 74988816 kB' 'Buffers: 8892 kB' 'Cached: 15687708 kB' 'SwapCached: 0 kB' 'Active: 12702504 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160328 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662532 kB' 'Mapped: 181836 kB' 'Shmem: 11501116 kB' 'KReclaimable: 473944 kB' 'Slab: 983712 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509768 kB' 'KernelStack: 19424 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13641876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212016 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 
0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.891 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:04:37.892 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
(the same [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue pair repeats for each remaining /proc/meminfo field)
00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71156964 kB' 'MemAvailable: 74990200 kB' 'Buffers: 8892 kB' 'Cached: 15687728 kB' 'SwapCached: 0 kB' 'Active: 12702144 kB' 'Inactive: 3653308 kB' 'Active(anon): 12159968 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662120 kB' 'Mapped: 181756 kB' 'Shmem: 11501136 kB' 'KReclaimable: 473944 kB' 'Slab: 983768 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509824 kB' 'KernelStack: 19392 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13643388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212016 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 
'DirectMap1G: 74448896 kB'
00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.893 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
(the same [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue pair repeats for each remaining /proc/meminfo field)
00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc
-- setup/common.sh@33 -- # echo 0 00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.895 nr_hugepages=1024 00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.895 resv_hugepages=0 00:04:37.895 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.895 surplus_hugepages=0 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.156 anon_hugepages=0 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71157904 kB' 'MemAvailable: 74991140 kB' 'Buffers: 8892 kB' 'Cached: 15687748 kB' 'SwapCached: 0 kB' 'Active: 12702624 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160448 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662576 kB' 'Mapped: 181756 kB' 'Shmem: 11501156 kB' 'KReclaimable: 473944 kB' 'Slab: 983768 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509824 kB' 'KernelStack: 19552 kB' 'PageTables: 9508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13643408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212096 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.156 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.157 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 32934224 kB' 'MemUsed: 15184300 kB' 'SwapCached: 0 kB' 'Active: 8469128 kB' 'Inactive: 3471696 kB' 'Active(anon): 8090236 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11408488 kB' 'Mapped: 162420 kB' 'AnonPages: 535616 kB' 'Shmem: 7557900 kB' 'KernelStack: 11432 kB' 'PageTables: 6252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225288 kB' 'Slab: 507236 kB' 'SReclaimable: 225288 kB' 'SUnreclaim: 281948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.158 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.159 node0=1024 expecting 1024 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.159 13:32:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:40.693 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:40.693 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:40.693 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:40.693 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:40.693 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:40.957 
13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71177168 kB' 'MemAvailable: 75010404 kB' 'Buffers: 8892 kB' 'Cached: 15687844 kB' 'SwapCached: 0 kB' 'Active: 12703144 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160968 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662884 kB' 'Mapped: 181788 kB' 'Shmem: 11501252 kB' 'KReclaimable: 473944 kB' 'Slab: 983564 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509620 kB' 'KernelStack: 19504 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13644164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212144 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.957 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.958 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.959 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71179684 kB' 'MemAvailable: 75012920 kB' 'Buffers: 8892 kB' 'Cached: 15687848 kB' 'SwapCached: 0 kB' 'Active: 12702608 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160432 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662368 kB' 'Mapped: 181764 kB' 'Shmem: 11501256 kB' 'KReclaimable: 473944 kB' 'Slab: 983532 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509588 kB' 'KernelStack: 19552 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13644184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212112 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.959 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 
13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.960 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.961 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71177272 kB' 'MemAvailable: 75010508 kB' 'Buffers: 8892 kB' 'Cached: 15687864 kB' 'SwapCached: 0 kB' 'Active: 12702792 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160616 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662508 kB' 'Mapped: 181764 kB' 'Shmem: 11501272 kB' 'KReclaimable: 473944 kB' 'Slab: 983472 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509528 kB' 'KernelStack: 19456 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13643956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 212128 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 
13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.961 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.962 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.963 nr_hugepages=1024 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.963 resv_hugepages=0 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.963 surplus_hugepages=0 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.963 anon_hugepages=0 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.963 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92292924 kB' 'MemFree: 71177072 kB' 'MemAvailable: 75010308 kB' 'Buffers: 8892 kB' 'Cached: 15687888 kB' 'SwapCached: 0 kB' 'Active: 12702392 kB' 'Inactive: 3653308 kB' 'Active(anon): 12160216 kB' 'Inactive(anon): 0 kB' 'Active(file): 542176 kB' 'Inactive(file): 3653308 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662608 kB' 'Mapped: 181764 kB' 'Shmem: 11501296 kB' 'KReclaimable: 473944 kB' 'Slab: 983472 kB' 'SReclaimable: 473944 kB' 'SUnreclaim: 509528 kB' 'KernelStack: 19344 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486488 kB' 'Committed_AS: 13641620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 211984 kB' 'VmallocChunk: 0 kB' 'Percpu: 82560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2515924 kB' 'DirectMap2M: 24426496 kB' 'DirectMap1G: 74448896 kB' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.963 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.964 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
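The long run of "[[ <field> == HugePages_Total ]]" / "continue" entries above and below is setup/common.sh's get_meminfo walking every "Key: value" pair of /proc/meminfo (or a per-node meminfo file) until it reaches the requested field and echoes its value. A minimal sketch of that lookup pattern, assuming a simplified standalone helper rather than the exact SPDK function (the real one also drives the xtrace output seen here):

shopt -s extglob

get_meminfo() {
    # get_meminfo <field> [numa-node]  -> prints the value column for <field>
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node lookups read the node-local meminfo instead of the global one.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix,
    # exactly as the trace shows with mem=("${mem[@]#Node +([0-9]) }").
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && echo "$val" && return 0
    done
    return 1
}

# e.g. get_meminfo HugePages_Total    -> 1024 (matches the trace above)
#      get_meminfo HugePages_Surp 0   -> surplus pages on NUMA node 0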
00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48118524 kB' 'MemFree: 32931420 kB' 'MemUsed: 15187104 kB' 'SwapCached: 0 kB' 'Active: 8469684 kB' 'Inactive: 3471696 kB' 'Active(anon): 8090792 kB' 'Inactive(anon): 0 kB' 'Active(file): 378892 kB' 'Inactive(file): 3471696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11408584 kB' 'Mapped: 162932 kB' 'AnonPages: 535956 kB' 'Shmem: 7557996 kB' 'KernelStack: 11512 kB' 'PageTables: 6488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 225288 kB' 'Slab: 506972 kB' 'SReclaimable: 225288 kB' 'SUnreclaim: 281684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.965 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.966 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:40.967 node0=1024 expecting 1024 00:04:40.967 13:32:33 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:40.967 00:04:40.967 real 0m5.884s 00:04:40.967 user 0m2.269s 00:04:40.967 sys 0m3.758s 00:04:40.967 13:32:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.967 13:32:33 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.967 ************************************ 00:04:40.967 END TEST no_shrink_alloc 00:04:40.967 ************************************ 00:04:40.967 13:32:33 setup.sh.hugepages -- 
setup/hugepages.sh@217 -- # clear_hp 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:40.967 13:32:33 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:40.967 00:04:40.967 real 0m22.363s 00:04:40.967 user 0m8.480s 00:04:40.967 sys 0m13.355s 00:04:40.967 13:32:33 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:40.967 13:32:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.967 ************************************ 00:04:40.967 END TEST hugepages 00:04:40.967 ************************************ 00:04:40.967 13:32:33 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:40.967 13:32:33 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:40.967 13:32:33 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:40.967 13:32:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.226 ************************************ 00:04:41.226 START TEST driver 00:04:41.226 ************************************ 00:04:41.226 13:32:33 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:41.226 * Looking for test storage... 
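Before the driver test starts, clear_hp (setup/hugepages.sh, traced just above) resets every huge page pool on every NUMA node to zero and exports CLEAR_HUGE=yes so later setup.sh reset calls also tear huge pages down. A rough sketch of that cleanup, assuming the echoed zeros in the trace are redirected into each pool's nr_hugepages file (xtrace does not show the redirection, so the target file is an assumption):

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Return every pool (2048kB, 1048576kB, ...) on every node to 0 pages.
            echo 0 > "$hp/nr_hugepages"    # assumed target file; see note above
        done
    done
    export CLEAR_HUGE=yes    # picked up by the subsequent setup.sh reset runs
}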
00:04:41.226 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:41.226 13:32:33 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:41.226 13:32:33 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.226 13:32:33 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.483 13:32:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:45.483 13:32:37 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:45.483 13:32:37 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:45.483 13:32:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.483 ************************************ 00:04:45.483 START TEST guess_driver 00:04:45.483 ************************************ 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 165 > 0 )) 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:45.483 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:45.484 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:45.484 13:32:37 
setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:45.484 Looking for driver=vfio-pci 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.484 13:32:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:48.020 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.020 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.020 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker 
setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.279 13:32:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.215 13:32:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.215 13:32:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.215 13:32:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.215 13:32:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:49.215 13:32:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:49.215 13:32:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.215 13:32:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.405 00:04:53.405 real 0m8.338s 00:04:53.405 user 0m2.413s 00:04:53.405 sys 0m4.202s 00:04:53.405 13:32:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.405 13:32:46 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.405 ************************************ 00:04:53.405 END TEST guess_driver 00:04:53.405 ************************************ 00:04:53.665 00:04:53.665 real 0m12.465s 00:04:53.665 user 0m3.496s 00:04:53.665 sys 0m6.359s 00:04:53.665 13:32:46 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.665 13:32:46 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.665 ************************************ 00:04:53.665 END TEST driver 00:04:53.665 ************************************ 00:04:53.665 13:32:46 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:53.665 13:32:46 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.665 13:32:46 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.665 13:32:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.665 ************************************ 00:04:53.665 START TEST devices 00:04:53.665 ************************************ 00:04:53.665 13:32:46 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:53.665 * Looking for test storage... 00:04:53.665 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:53.665 13:32:46 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:53.665 13:32:46 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:53.665 13:32:46 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:53.665 13:32:46 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:56.952 13:32:49 
setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme2n1 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme2n1 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:56.952 13:32:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:56.952 13:32:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:56.952 No valid GPT data, bailing 00:04:56.952 13:32:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:56.952 13:32:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:56.952 13:32:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:56.952 13:32:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:56.952 13:32:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:56.953 13:32:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:56.953 13:32:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:04:56.953 No valid GPT data, bailing 00:04:56.953 13:32:49 setup.sh.devices -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:56.953 13:32:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:56.953 13:32:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:56.953 13:32:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:56.953 13:32:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:56.953 13:32:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:56.953 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:56.953 13:32:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:56.953 13:32:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:04:57.212 No valid GPT data, bailing 00:04:57.212 13:32:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:57.212 13:32:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:57.212 13:32:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:57.212 13:32:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:57.212 13:32:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:57.212 13:32:49 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@209 -- # (( 3 > 0 )) 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:57.212 13:32:49 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:57.212 13:32:49 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:57.212 13:32:49 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:57.212 13:32:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.212 ************************************ 00:04:57.212 START TEST nvme_mount 00:04:57.212 ************************************ 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:57.212 13:32:49 
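The pass traced above (devices.sh@196-211) is what builds the list of candidate test disks: every non-zoned /sys/block/nvme* namespace whose controller is not blocked, that carries no existing partition table (the "No valid GPT data, bailing" lines are that check passing), and that is at least min_disk_size=3221225472 bytes ends up in blocks/blocks_to_pci, and the first entry becomes test_disk. A condensed, self-contained sketch of that selection logic, using blkid where the trace also calls spdk-gpt.py; the structure mirrors the script but the code itself is illustrative, not SPDK's:

#!/usr/bin/env bash
# Sketch of the disk-selection pass above. Assumes a Linux sysfs layout and blkid;
# the real script additionally honors PCI_BLOCKED and uses spdk-gpt.py first.
shopt -s extglob nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in devices.sh@198

declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme!(*c*); do       # skip multipath controller nodes (nvmeXcYnZ)
    name=${block##*/}                        # e.g. nvme0n1
    ctrl=${name%n*}                          # e.g. nvme0
    pci=$(basename "$(readlink -f "/sys/class/nvme/$ctrl/device")")   # e.g. 0000:5f:00.0
    # Skip zoned namespaces - the mount tests only use regular block devices.
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # Skip disks that already carry a partition table.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$name") ]] && continue
    # Only keep disks big enough for the tests (size file is in 512-byte sectors).
    (( $(<"$block/size") * 512 >= min_disk_size )) || continue
    blocks+=("$name")
    blocks_to_pci["$name"]=$pci
done

printf 'candidate disk: %s\n' "${blocks[@]}"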
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:57.212 13:32:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:58.149 Creating new GPT entries in memory. 00:04:58.149 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:58.149 other utilities. 00:04:58.149 13:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:58.149 13:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.149 13:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:58.149 13:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:58.149 13:32:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:59.086 Creating new GPT entries in memory. 00:04:59.086 The operation has completed successfully. 
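The sgdisk calls just logged wipe the label and carve a single 1 GiB partition: 1073741824 bytes / 512 = 2097152 sectors, so partition 1 runs from sector 2048 to 2048 + 2097152 - 1 = 2099199, which is exactly the --new=1:2048:2099199 seen above. A standalone approximation of that step, with udevadm settle standing in for SPDK's sync_dev_uevents.sh watcher:

#!/usr/bin/env bash
# Sketch of the partition step logged above (common.sh@56-60): wipe the label,
# create one 1 GiB data partition, then wait until the kernel exposes it.
set -euo pipefail
DISK=/dev/nvme0n1            # test disk picked earlier in the trace
SIZE=$((1073741824 / 512))   # 1 GiB in 512-byte sectors = 2097152
START=2048
END=$((START + SIZE - 1))    # 2099199, matching --new=1:2048:2099199

sgdisk "$DISK" --zap-all                               # destroy old GPT/MBR structures
flock "$DISK" sgdisk "$DISK" --new=1:"$START":"$END"   # serialize against other writers
udevadm settle                                         # wait for /dev/nvme0n1p1 to appear
lsblk "$DISK"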
00:04:59.086 13:32:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:59.086 13:32:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.086 13:32:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3400748 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.345 13:32:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 
13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:01.879 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:02.138 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:02.138 13:32:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:02.396 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:02.396 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:02.396 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:02.396 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:02.396 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:02.396 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:02.396 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.396 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:02.396 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.397 13:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.931 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.191 13:32:57 
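Each of the PCI loops above is the verify helper re-running scripts/setup.sh config with PCI_ALLOWED pinned to the test controller and checking that the status line for 0000:5f:00.0 reports the mounted namespace as active rather than rebinding it to vfio-pci. A minimal stand-alone version of that assertion; the grep is a simplification of the field-by-field "read -r pci _ _ status" parsing in the trace, and it assumes setup.sh config prints one status line per controller as shown above:

#!/usr/bin/env bash
# Sketch: restrict setup.sh to one controller and assert it refuses to unbind it
# because its namespace is mounted (the "so not binding PCI dev" lines above).
set -euo pipefail
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
DEV=0000:5f:00.0

status=$(PCI_ALLOWED=$DEV "$SPDK/scripts/setup.sh" config)
if grep -q "$DEV.*Active devices:.*nvme0n1" <<< "$status"; then
    echo "ok: $DEV left alone because its namespace is in use"
else
    echo "unexpected: $DEV was not reported as active" >&2
    exit 1
fi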
setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.191 13:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:07.727 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.727 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:07.727 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:07.727 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.727 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.727 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.987 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.247 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.247 00:05:08.247 real 0m11.003s 00:05:08.247 user 0m3.280s 00:05:08.247 sys 0m5.489s 00:05:08.247 13:33:00 
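The wipefs output above is the nvme_mount teardown: unmount the test directory, then erase every signature left behind, the ext4 superblock magic (53 ef) on the formatted device, and on the whole disk the primary GPT header, the backup GPT header near the end of the device, and the protective MBR (55 aa). The same sequence, reduced to its essentials; paths are the ones used in this run, the rest is illustrative:

#!/usr/bin/env bash
# Sketch of cleanup_nvme (devices.sh@20-28): unmount if mounted, then strip all
# filesystem/GPT signatures so the next test starts from a blank disk.
MOUNT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$MOUNT" && umount "$MOUNT"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic on the partition
[[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # GPT headers + PMBR on the disk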
setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:08.247 13:33:00 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:08.247 ************************************ 00:05:08.247 END TEST nvme_mount 00:05:08.247 ************************************ 00:05:08.247 13:33:00 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:08.247 13:33:00 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:08.247 13:33:00 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:08.247 13:33:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.247 ************************************ 00:05:08.247 START TEST dm_mount 00:05:08.247 ************************************ 00:05:08.247 13:33:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:05:08.247 13:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:08.247 13:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:08.248 13:33:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:09.186 Creating new GPT entries in memory. 00:05:09.186 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:09.186 other utilities. 00:05:09.186 13:33:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:09.186 13:33:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.186 13:33:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:09.186 13:33:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.186 13:33:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:10.566 Creating new GPT entries in memory. 00:05:10.566 The operation has completed successfully. 00:05:10.566 13:33:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.566 13:33:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.566 13:33:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.566 13:33:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.566 13:33:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:11.504 The operation has completed successfully. 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3404983 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.504 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.505 13:33:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.041 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:14.042 13:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.301 13:33:07 
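Underneath the dm_mount test, the two 1 GiB partitions created earlier are joined into a single device-mapper target (the dmsetup create nvme_dm_test seen above), and the holders links checked in this loop (/sys/class/block/nvme0n1p1/holders/dm-0 and .../nvme0n1p2/holders/dm-0) are how the script proves the mapping really sits on top of both partitions. The dmsetup table below is a reconstruction of an equivalent linear concatenation, not the test's literal invocation:

#!/usr/bin/env bash
# Sketch: concatenate the two partitions into one linear dm target, then show the
# holders links the verify loop above relies on.
set -euo pipefail
P1=/dev/nvme0n1p1
P2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$P1")   # sizes in 512-byte sectors
s2=$(blockdev --getsz "$P2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $P1 0
$s1 $s2 linear $P2 0
EOF

dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
ls "/sys/class/block/${P1##*/}/holders/" "/sys/class/block/${P2##*/}/holders/"
echo "created /dev/mapper/nvme_dm_test backed by $dm"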
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.301 13:33:07 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:16.839 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:16.839 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:16.840 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:16.840 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.840 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:16.840 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:17.099 13:33:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 
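The cleanup that starts here mirrors the nvme_mount teardown but in dm order: unmount the dm mount point, remove the nvme_dm_test mapping, and only then wipe the two partitions it was built on, the same ordering the trace follows. Condensed to the commands involved (same paths as this run; error handling omitted):

#!/usr/bin/env bash
# Sketch of cleanup_dm (devices.sh@33-43): tear the dm mapping down first, then
# wipe the partitions underneath it.
DM_MOUNT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount
mountpoint -q "$DM_MOUNT" && umount "$DM_MOUNT"
[[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2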
00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:17.359 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:17.359 00:05:17.359 real 0m9.194s 00:05:17.359 user 0m2.354s 00:05:17.359 sys 0m3.884s 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.359 13:33:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:17.359 ************************************ 00:05:17.359 END TEST dm_mount 00:05:17.359 ************************************ 00:05:17.359 13:33:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:17.359 13:33:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:17.359 13:33:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.359 13:33:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.359 13:33:10 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:17.617 13:33:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.617 13:33:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.875 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:17.875 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:17.875 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:17.875 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.875 13:33:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:17.875 00:05:17.875 real 0m24.135s 00:05:17.875 user 0m7.071s 00:05:17.875 sys 0m11.721s 00:05:17.875 13:33:10 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.875 13:33:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.875 ************************************ 00:05:17.875 END TEST devices 00:05:17.875 ************************************ 00:05:17.875 00:05:17.875 real 1m21.557s 00:05:17.875 user 0m26.383s 00:05:17.875 sys 0m44.366s 00:05:17.875 13:33:10 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:17.875 13:33:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:17.875 ************************************ 00:05:17.875 END TEST setup.sh 00:05:17.875 ************************************ 00:05:17.875 13:33:10 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:21.164 Hugepages 00:05:21.164 node hugesize free / total 00:05:21.164 node0 1048576kB 0 / 0 
00:05:21.164 node0 2048kB 2048 / 2048 00:05:21.164 node1 1048576kB 0 / 0 00:05:21.164 node1 2048kB 0 / 0 00:05:21.164 00:05:21.164 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.164 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:21.164 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:21.164 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme1 nvme1n1 00:05:21.164 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:21.164 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:21.164 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:21.164 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme2 nvme2n1 00:05:21.164 13:33:13 -- spdk/autotest.sh@130 -- # uname -s 00:05:21.164 13:33:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:21.164 13:33:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:21.164 13:33:13 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:23.782 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:23.782 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:24.041 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.977 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.977 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.977 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.977 13:33:17 -- common/autotest_common.sh@1531 -- # sleep 1 00:05:26.355 13:33:18 -- common/autotest_common.sh@1532 -- # bdfs=() 00:05:26.355 13:33:18 -- common/autotest_common.sh@1532 -- # local bdfs 00:05:26.355 13:33:18 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.355 13:33:18 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:05:26.355 13:33:18 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:26.355 13:33:18 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:26.355 13:33:18 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.355 13:33:18 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.355 13:33:18 -- common/autotest_common.sh@1513 -- 
# jq -r '.config[].params.traddr' 00:05:26.355 13:33:18 -- common/autotest_common.sh@1514 -- # (( 3 == 0 )) 00:05:26.355 13:33:18 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 00:05:26.355 13:33:18 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:28.891 Waiting for block devices as requested 00:05:28.891 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:05:28.891 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:28.891 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.150 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.150 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.150 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.150 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.408 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.408 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:29.408 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:29.667 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:29.667 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:29.667 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:29.926 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:29.926 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:29.926 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:29.926 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:30.186 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:30.186 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:30.186 13:33:23 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:30.186 13:33:23 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:30.186 13:33:23 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:05:30.186 13:33:23 -- common/autotest_common.sh@1501 -- # grep 0000:5e:00.0/nvme/nvme 00:05:30.186 13:33:23 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:30.186 13:33:23 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 ]] 00:05:30.186 13:33:23 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme1 00:05:30.186 13:33:23 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme1 00:05:30.186 13:33:23 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme1 00:05:30.186 13:33:23 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme1 ]] 00:05:30.186 13:33:23 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme1 00:05:30.186 13:33:23 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:30.186 13:33:23 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:30.186 13:33:23 -- common/autotest_common.sh@1544 -- # oacs=' 0xe' 00:05:30.186 13:33:23 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:30.186 13:33:23 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:30.186 13:33:23 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme1 00:05:30.186 13:33:23 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:30.186 13:33:23 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:30.186 13:33:23 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:30.186 13:33:23 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:30.186 13:33:23 -- common/autotest_common.sh@1556 -- # continue 00:05:30.186 13:33:23 -- common/autotest_common.sh@1537 -- # for bdf in 
"${bdfs[@]}" 00:05:30.186 13:33:23 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:05:30.186 13:33:23 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:05:30.186 13:33:23 -- common/autotest_common.sh@1501 -- # grep 0000:5f:00.0/nvme/nvme 00:05:30.186 13:33:23 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:30.186 13:33:23 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:05:30.186 13:33:23 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:30.445 13:33:23 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:05:30.445 13:33:23 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:05:30.445 13:33:23 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:05:30.445 13:33:23 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:05:30.445 13:33:23 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:30.445 13:33:23 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:30.445 13:33:23 -- common/autotest_common.sh@1544 -- # oacs=' 0xf' 00:05:30.445 13:33:23 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:30.445 13:33:23 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:30.445 13:33:23 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:05:30.445 13:33:23 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:05:30.445 13:33:23 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:30.445 13:33:23 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:30.445 13:33:23 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:30.445 13:33:23 -- common/autotest_common.sh@1556 -- # continue 00:05:30.445 13:33:23 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:05:30.445 13:33:23 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:30.445 13:33:23 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:05:30.445 13:33:23 -- common/autotest_common.sh@1501 -- # grep 0000:d8:00.0/nvme/nvme 00:05:30.445 13:33:23 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme2 00:05:30.445 13:33:23 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme2 ]] 00:05:30.445 13:33:23 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme2 00:05:30.445 13:33:23 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme2 00:05:30.445 13:33:23 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme2 00:05:30.446 13:33:23 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme2 ]] 00:05:30.446 13:33:23 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme2 00:05:30.446 13:33:23 -- common/autotest_common.sh@1544 -- # grep oacs 00:05:30.446 13:33:23 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:05:30.446 13:33:23 -- common/autotest_common.sh@1544 -- # oacs=' 0xf' 00:05:30.446 13:33:23 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:05:30.446 13:33:23 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:05:30.446 13:33:23 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme2 00:05:30.446 13:33:23 -- common/autotest_common.sh@1553 -- # grep 
unvmcap 00:05:30.446 13:33:23 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:05:30.446 13:33:23 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:05:30.446 13:33:23 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:05:30.446 13:33:23 -- common/autotest_common.sh@1556 -- # continue 00:05:30.446 13:33:23 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:30.446 13:33:23 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:30.446 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.446 13:33:23 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:30.446 13:33:23 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:30.446 13:33:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.446 13:33:23 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:32.988 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:32.988 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:33.924 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:33.924 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:33.924 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:34.183 13:33:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:34.183 13:33:26 -- common/autotest_common.sh@729 -- # xtrace_disable 00:05:34.183 13:33:26 -- common/autotest_common.sh@10 -- # set +x 00:05:34.183 13:33:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:34.183 13:33:26 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:05:34.183 13:33:26 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:05:34.183 13:33:26 -- common/autotest_common.sh@1576 -- # bdfs=() 00:05:34.183 13:33:26 -- common/autotest_common.sh@1576 -- # local bdfs 00:05:34.183 13:33:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:05:34.183 13:33:26 -- common/autotest_common.sh@1512 -- # bdfs=() 00:05:34.183 13:33:26 -- common/autotest_common.sh@1512 -- # local bdfs 00:05:34.183 13:33:26 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.183 13:33:26 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.183 13:33:26 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:05:34.183 13:33:26 -- common/autotest_common.sh@1514 -- # (( 3 == 0 )) 00:05:34.183 13:33:26 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 00:05:34.183 13:33:27 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:34.183 13:33:27 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:34.183 13:33:27 -- 
common/autotest_common.sh@1579 -- # device=0x0a54 00:05:34.183 13:33:27 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:34.183 13:33:27 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:05:34.183 13:33:27 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:34.183 13:33:27 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:05:34.183 13:33:27 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:05:34.183 13:33:27 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:34.183 13:33:27 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:05:34.183 13:33:27 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:05:34.183 13:33:27 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:34.183 13:33:27 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:05:34.183 13:33:27 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:34.183 13:33:27 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:05:34.183 13:33:27 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0 00:05:34.183 13:33:27 -- common/autotest_common.sh@1591 -- # [[ -z 0000:5e:00.0 ]] 00:05:34.183 13:33:27 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=3414619 00:05:34.183 13:33:27 -- common/autotest_common.sh@1597 -- # waitforlisten 3414619 00:05:34.183 13:33:27 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.183 13:33:27 -- common/autotest_common.sh@830 -- # '[' -z 3414619 ']' 00:05:34.183 13:33:27 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.183 13:33:27 -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:34.183 13:33:27 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.183 13:33:27 -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:34.183 13:33:27 -- common/autotest_common.sh@10 -- # set +x 00:05:34.183 [2024-06-11 13:33:27.045945] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
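The opal_revert_cleanup step traced above reduces to one filter: take the NVMe BDFs reported by gen_nvme.sh and keep those whose PCI device ID in sysfs is 0x0a54. A minimal standalone sketch of that filter, assuming the same SPDK checkout path shown in the log (the real logic lives in common/autotest_common.sh):

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # enumerate NVMe BDFs the same way the trace does: gen_nvme.sh emits JSON, jq pulls the traddr fields
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  opal_bdfs=()
  for bdf in "${bdfs[@]}"; do
      # /sys/bus/pci/devices/<bdf>/device holds the PCI device ID, e.g. 0x0a54 for these drives
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
  done
  printf '%s\n' "${opal_bdfs[@]}"   # on this node: 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0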
00:05:34.183 [2024-06-11 13:33:27.046014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414619 ] 00:05:34.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.441 [2024-06-11 13:33:27.123040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.441 [2024-06-11 13:33:27.217788] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.699 13:33:27 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:34.699 13:33:27 -- common/autotest_common.sh@863 -- # return 0 00:05:34.699 13:33:27 -- common/autotest_common.sh@1599 -- # bdf_id=0 00:05:34.699 13:33:27 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:05:34.699 13:33:27 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:37.988 nvme0n1 00:05:37.988 13:33:30 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:37.988 [2024-06-11 13:33:30.758321] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:37.988 request: 00:05:37.988 { 00:05:37.988 "nvme_ctrlr_name": "nvme0", 00:05:37.988 "password": "test", 00:05:37.988 "method": "bdev_nvme_opal_revert", 00:05:37.988 "req_id": 1 00:05:37.988 } 00:05:37.988 Got JSON-RPC error response 00:05:37.988 response: 00:05:37.988 { 00:05:37.988 "code": -32602, 00:05:37.988 "message": "Invalid parameters" 00:05:37.988 } 00:05:37.988 13:33:30 -- common/autotest_common.sh@1603 -- # true 00:05:37.988 13:33:30 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:05:37.988 13:33:30 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:05:37.988 13:33:30 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:5f:00.0 00:05:41.274 nvme1n1 00:05:41.274 13:33:33 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:05:41.274 [2024-06-11 13:33:34.098610] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:41.274 [2024-06-11 13:33:34.098648] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:41.274 request: 00:05:41.274 { 00:05:41.274 "nvme_ctrlr_name": "nvme1", 00:05:41.274 "password": "test", 00:05:41.274 "method": "bdev_nvme_opal_revert", 00:05:41.274 "req_id": 1 00:05:41.274 } 00:05:41.274 Got JSON-RPC error response 00:05:41.274 response: 00:05:41.274 { 00:05:41.274 "code": -32603, 00:05:41.274 "message": "Internal error" 00:05:41.274 } 00:05:41.274 13:33:34 -- common/autotest_common.sh@1603 -- # true 00:05:41.274 13:33:34 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:05:41.274 13:33:34 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}" 00:05:41.274 13:33:34 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme2 -t pcie -a 0000:d8:00.0 00:05:44.563 nvme2n1 00:05:44.563 13:33:37 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme2 -p test 
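The bdev_nvme_opal_revert attempts in this stretch of the trace (nvme0 and nvme1 above, nvme2 in the output that follows) all go through the same JSON-RPC call. A sketch of issuing it by hand against the running spdk_tgt, assuming the default RPC socket path:

  # equivalent of the traced: scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      bdev_nvme_opal_revert -b nvme1 -p test
  # request shape, as echoed back in the error responses above:
  #   { "method": "bdev_nvme_opal_revert",
  #     "params": { "nvme_ctrlr_name": "nvme1", "password": "test" } }
  # on these drives it fails with -32602 (controller does not support Opal) or
  # -32603 (Revert TPer returns status 18), which the cleanup step tolerates via 'true'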
00:05:44.563 [2024-06-11 13:33:37.433947] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:44.563 [2024-06-11 13:33:37.433984] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:44.563 request: 00:05:44.563 { 00:05:44.563 "nvme_ctrlr_name": "nvme2", 00:05:44.563 "password": "test", 00:05:44.563 "method": "bdev_nvme_opal_revert", 00:05:44.563 "req_id": 1 00:05:44.563 } 00:05:44.563 Got JSON-RPC error response 00:05:44.563 response: 00:05:44.563 { 00:05:44.563 "code": -32603, 00:05:44.563 "message": "Internal error" 00:05:44.563 } 00:05:44.563 13:33:37 -- common/autotest_common.sh@1603 -- # true 00:05:44.563 13:33:37 -- common/autotest_common.sh@1604 -- # (( ++bdf_id )) 00:05:44.563 13:33:37 -- common/autotest_common.sh@1607 -- # killprocess 3414619 00:05:44.563 13:33:37 -- common/autotest_common.sh@949 -- # '[' -z 3414619 ']' 00:05:44.563 13:33:37 -- common/autotest_common.sh@953 -- # kill -0 3414619 00:05:44.563 13:33:37 -- common/autotest_common.sh@954 -- # uname 00:05:44.563 13:33:37 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:44.563 13:33:37 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3414619 00:05:44.822 13:33:37 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:44.822 13:33:37 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:44.822 13:33:37 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3414619' 00:05:44.822 killing process with pid 3414619 00:05:44.822 13:33:37 -- common/autotest_common.sh@968 -- # kill 3414619 00:05:44.822 13:33:37 -- common/autotest_common.sh@973 -- # wait 3414619 00:05:47.356 13:33:39 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:47.356 13:33:39 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:47.356 13:33:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:47.356 13:33:39 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:47.356 13:33:39 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:47.356 13:33:39 -- common/autotest_common.sh@723 -- # xtrace_disable 00:05:47.356 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:05:47.356 13:33:39 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:47.356 13:33:39 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:47.356 13:33:39 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.356 13:33:39 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.356 13:33:39 -- common/autotest_common.sh@10 -- # set +x 00:05:47.356 ************************************ 00:05:47.356 START TEST env 00:05:47.356 ************************************ 00:05:47.356 13:33:39 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:47.356 * Looking for test storage... 
00:05:47.356 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:47.356 13:33:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:47.356 13:33:39 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.356 13:33:39 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.356 13:33:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.356 ************************************ 00:05:47.356 START TEST env_memory 00:05:47.356 ************************************ 00:05:47.356 13:33:40 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:47.356 00:05:47.356 00:05:47.356 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.356 http://cunit.sourceforge.net/ 00:05:47.356 00:05:47.356 00:05:47.356 Suite: memory 00:05:47.356 Test: alloc and free memory map ...[2024-06-11 13:33:40.069727] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.356 passed 00:05:47.356 Test: mem map translation ...[2024-06-11 13:33:40.091293] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.356 [2024-06-11 13:33:40.091315] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.356 [2024-06-11 13:33:40.091364] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.356 [2024-06-11 13:33:40.091381] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.356 passed 00:05:47.356 Test: mem map registration ...[2024-06-11 13:33:40.130119] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:47.356 [2024-06-11 13:33:40.130139] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:47.356 passed 00:05:47.356 Test: mem map adjacent registrations ...passed 00:05:47.356 00:05:47.356 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.356 suites 1 1 n/a 0 0 00:05:47.356 tests 4 4 4 0 0 00:05:47.356 asserts 152 152 152 0 n/a 00:05:47.356 00:05:47.357 Elapsed time = 0.139 seconds 00:05:47.357 00:05:47.357 real 0m0.150s 00:05:47.357 user 0m0.136s 00:05:47.357 sys 0m0.014s 00:05:47.357 13:33:40 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.357 13:33:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:47.357 ************************************ 00:05:47.357 END TEST env_memory 00:05:47.357 ************************************ 00:05:47.357 13:33:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:47.357 13:33:40 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:47.357 13:33:40 env -- common/autotest_common.sh@1106 
-- # xtrace_disable 00:05:47.357 13:33:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.357 ************************************ 00:05:47.357 START TEST env_vtophys 00:05:47.357 ************************************ 00:05:47.357 13:33:40 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:47.357 EAL: lib.eal log level changed from notice to debug 00:05:47.357 EAL: Detected lcore 0 as core 0 on socket 0 00:05:47.357 EAL: Detected lcore 1 as core 1 on socket 0 00:05:47.357 EAL: Detected lcore 2 as core 2 on socket 0 00:05:47.357 EAL: Detected lcore 3 as core 3 on socket 0 00:05:47.357 EAL: Detected lcore 4 as core 4 on socket 0 00:05:47.357 EAL: Detected lcore 5 as core 5 on socket 0 00:05:47.357 EAL: Detected lcore 6 as core 8 on socket 0 00:05:47.357 EAL: Detected lcore 7 as core 9 on socket 0 00:05:47.357 EAL: Detected lcore 8 as core 10 on socket 0 00:05:47.357 EAL: Detected lcore 9 as core 11 on socket 0 00:05:47.357 EAL: Detected lcore 10 as core 12 on socket 0 00:05:47.357 EAL: Detected lcore 11 as core 16 on socket 0 00:05:47.357 EAL: Detected lcore 12 as core 17 on socket 0 00:05:47.357 EAL: Detected lcore 13 as core 18 on socket 0 00:05:47.357 EAL: Detected lcore 14 as core 19 on socket 0 00:05:47.357 EAL: Detected lcore 15 as core 20 on socket 0 00:05:47.357 EAL: Detected lcore 16 as core 21 on socket 0 00:05:47.357 EAL: Detected lcore 17 as core 24 on socket 0 00:05:47.357 EAL: Detected lcore 18 as core 25 on socket 0 00:05:47.357 EAL: Detected lcore 19 as core 26 on socket 0 00:05:47.357 EAL: Detected lcore 20 as core 27 on socket 0 00:05:47.357 EAL: Detected lcore 21 as core 28 on socket 0 00:05:47.357 EAL: Detected lcore 22 as core 0 on socket 1 00:05:47.357 EAL: Detected lcore 23 as core 1 on socket 1 00:05:47.357 EAL: Detected lcore 24 as core 2 on socket 1 00:05:47.357 EAL: Detected lcore 25 as core 3 on socket 1 00:05:47.357 EAL: Detected lcore 26 as core 4 on socket 1 00:05:47.357 EAL: Detected lcore 27 as core 5 on socket 1 00:05:47.357 EAL: Detected lcore 28 as core 8 on socket 1 00:05:47.357 EAL: Detected lcore 29 as core 9 on socket 1 00:05:47.357 EAL: Detected lcore 30 as core 10 on socket 1 00:05:47.357 EAL: Detected lcore 31 as core 11 on socket 1 00:05:47.357 EAL: Detected lcore 32 as core 12 on socket 1 00:05:47.357 EAL: Detected lcore 33 as core 16 on socket 1 00:05:47.357 EAL: Detected lcore 34 as core 17 on socket 1 00:05:47.357 EAL: Detected lcore 35 as core 18 on socket 1 00:05:47.357 EAL: Detected lcore 36 as core 19 on socket 1 00:05:47.357 EAL: Detected lcore 37 as core 20 on socket 1 00:05:47.357 EAL: Detected lcore 38 as core 21 on socket 1 00:05:47.357 EAL: Detected lcore 39 as core 24 on socket 1 00:05:47.357 EAL: Detected lcore 40 as core 25 on socket 1 00:05:47.357 EAL: Detected lcore 41 as core 26 on socket 1 00:05:47.357 EAL: Detected lcore 42 as core 27 on socket 1 00:05:47.357 EAL: Detected lcore 43 as core 28 on socket 1 00:05:47.357 EAL: Detected lcore 44 as core 0 on socket 0 00:05:47.357 EAL: Detected lcore 45 as core 1 on socket 0 00:05:47.357 EAL: Detected lcore 46 as core 2 on socket 0 00:05:47.357 EAL: Detected lcore 47 as core 3 on socket 0 00:05:47.357 EAL: Detected lcore 48 as core 4 on socket 0 00:05:47.357 EAL: Detected lcore 49 as core 5 on socket 0 00:05:47.357 EAL: Detected lcore 50 as core 8 on socket 0 00:05:47.357 EAL: Detected lcore 51 as core 9 on socket 0 00:05:47.357 EAL: Detected lcore 52 as core 10 on socket 0 00:05:47.357 
EAL: Detected lcore 53 as core 11 on socket 0 00:05:47.357 EAL: Detected lcore 54 as core 12 on socket 0 00:05:47.357 EAL: Detected lcore 55 as core 16 on socket 0 00:05:47.357 EAL: Detected lcore 56 as core 17 on socket 0 00:05:47.357 EAL: Detected lcore 57 as core 18 on socket 0 00:05:47.357 EAL: Detected lcore 58 as core 19 on socket 0 00:05:47.357 EAL: Detected lcore 59 as core 20 on socket 0 00:05:47.357 EAL: Detected lcore 60 as core 21 on socket 0 00:05:47.357 EAL: Detected lcore 61 as core 24 on socket 0 00:05:47.357 EAL: Detected lcore 62 as core 25 on socket 0 00:05:47.357 EAL: Detected lcore 63 as core 26 on socket 0 00:05:47.357 EAL: Detected lcore 64 as core 27 on socket 0 00:05:47.357 EAL: Detected lcore 65 as core 28 on socket 0 00:05:47.357 EAL: Detected lcore 66 as core 0 on socket 1 00:05:47.357 EAL: Detected lcore 67 as core 1 on socket 1 00:05:47.357 EAL: Detected lcore 68 as core 2 on socket 1 00:05:47.357 EAL: Detected lcore 69 as core 3 on socket 1 00:05:47.357 EAL: Detected lcore 70 as core 4 on socket 1 00:05:47.357 EAL: Detected lcore 71 as core 5 on socket 1 00:05:47.357 EAL: Detected lcore 72 as core 8 on socket 1 00:05:47.357 EAL: Detected lcore 73 as core 9 on socket 1 00:05:47.357 EAL: Detected lcore 74 as core 10 on socket 1 00:05:47.357 EAL: Detected lcore 75 as core 11 on socket 1 00:05:47.357 EAL: Detected lcore 76 as core 12 on socket 1 00:05:47.357 EAL: Detected lcore 77 as core 16 on socket 1 00:05:47.357 EAL: Detected lcore 78 as core 17 on socket 1 00:05:47.357 EAL: Detected lcore 79 as core 18 on socket 1 00:05:47.357 EAL: Detected lcore 80 as core 19 on socket 1 00:05:47.357 EAL: Detected lcore 81 as core 20 on socket 1 00:05:47.357 EAL: Detected lcore 82 as core 21 on socket 1 00:05:47.357 EAL: Detected lcore 83 as core 24 on socket 1 00:05:47.357 EAL: Detected lcore 84 as core 25 on socket 1 00:05:47.357 EAL: Detected lcore 85 as core 26 on socket 1 00:05:47.357 EAL: Detected lcore 86 as core 27 on socket 1 00:05:47.357 EAL: Detected lcore 87 as core 28 on socket 1 00:05:47.616 EAL: Maximum logical cores by configuration: 128 00:05:47.616 EAL: Detected CPU lcores: 88 00:05:47.616 EAL: Detected NUMA nodes: 2 00:05:47.616 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:47.616 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:47.616 EAL: Checking presence of .so 'librte_eal.so' 00:05:47.616 EAL: Detected static linkage of DPDK 00:05:47.616 EAL: No shared files mode enabled, IPC will be disabled 00:05:47.616 EAL: Bus pci wants IOVA as 'DC' 00:05:47.616 EAL: Buses did not request a specific IOVA mode. 00:05:47.616 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:47.616 EAL: Selected IOVA mode 'VA' 00:05:47.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.616 EAL: Probing VFIO support... 00:05:47.616 EAL: IOMMU type 1 (Type 1) is supported 00:05:47.616 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:47.616 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:47.616 EAL: VFIO support initialized 00:05:47.616 EAL: Ask a virtual area of 0x2e000 bytes 00:05:47.616 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:47.616 EAL: Setting up physically contiguous memory... 
00:05:47.616 EAL: Setting maximum number of open files to 524288 00:05:47.616 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:47.617 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:47.617 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:47.617 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:47.617 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.617 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:47.617 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:47.617 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.617 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:47.617 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:47.617 EAL: Hugepages will be freed exactly as allocated. 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: TSC frequency is ~2100000 KHz 00:05:47.617 EAL: Main lcore 0 is ready (tid=7fbe4ca6ba00;cpuset=[0]) 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 0 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 2MB 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Mem event callback 'spdk:(nil)' registered 00:05:47.617 00:05:47.617 00:05:47.617 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.617 http://cunit.sourceforge.net/ 00:05:47.617 00:05:47.617 00:05:47.617 Suite: components_suite 00:05:47.617 Test: vtophys_malloc_test ...passed 00:05:47.617 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 6MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 6MB 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 10MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 10MB 00:05:47.617 EAL: Trying to obtain current memory policy. 
00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 18MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 18MB 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 34MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 34MB 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 66MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 66MB 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.617 EAL: Restoring previous memory policy: 4 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was expanded by 130MB 00:05:47.617 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.617 EAL: request: mp_malloc_sync 00:05:47.617 EAL: No shared files mode enabled, IPC is disabled 00:05:47.617 EAL: Heap on socket 0 was shrunk by 130MB 00:05:47.617 EAL: Trying to obtain current memory policy. 00:05:47.617 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.876 EAL: Restoring previous memory policy: 4 00:05:47.876 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.876 EAL: request: mp_malloc_sync 00:05:47.876 EAL: No shared files mode enabled, IPC is disabled 00:05:47.876 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.877 EAL: request: mp_malloc_sync 00:05:47.877 EAL: No shared files mode enabled, IPC is disabled 00:05:47.877 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.877 EAL: Trying to obtain current memory policy. 
00:05:47.877 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.877 EAL: Restoring previous memory policy: 4 00:05:47.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.877 EAL: request: mp_malloc_sync 00:05:47.877 EAL: No shared files mode enabled, IPC is disabled 00:05:47.877 EAL: Heap on socket 0 was expanded by 514MB 00:05:48.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.135 EAL: request: mp_malloc_sync 00:05:48.135 EAL: No shared files mode enabled, IPC is disabled 00:05:48.135 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.135 EAL: Trying to obtain current memory policy. 00:05:48.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.394 EAL: Restoring previous memory policy: 4 00:05:48.394 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.394 EAL: request: mp_malloc_sync 00:05:48.394 EAL: No shared files mode enabled, IPC is disabled 00:05:48.394 EAL: Heap on socket 0 was expanded by 1026MB 00:05:48.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.652 EAL: request: mp_malloc_sync 00:05:48.652 EAL: No shared files mode enabled, IPC is disabled 00:05:48.652 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:48.652 passed 00:05:48.652 00:05:48.652 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.652 suites 1 1 n/a 0 0 00:05:48.652 tests 2 2 2 0 0 00:05:48.652 asserts 497 497 497 0 n/a 00:05:48.652 00:05:48.652 Elapsed time = 1.167 seconds 00:05:48.652 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.652 EAL: request: mp_malloc_sync 00:05:48.652 EAL: No shared files mode enabled, IPC is disabled 00:05:48.652 EAL: Heap on socket 0 was shrunk by 2MB 00:05:48.652 EAL: No shared files mode enabled, IPC is disabled 00:05:48.652 EAL: No shared files mode enabled, IPC is disabled 00:05:48.652 EAL: No shared files mode enabled, IPC is disabled 00:05:48.652 00:05:48.652 real 0m1.302s 00:05:48.652 user 0m0.751s 00:05:48.652 sys 0m0.518s 00:05:48.652 13:33:41 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:48.652 13:33:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:48.652 ************************************ 00:05:48.652 END TEST env_vtophys 00:05:48.652 ************************************ 00:05:48.912 13:33:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:48.912 13:33:41 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:48.912 13:33:41 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.912 13:33:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.912 ************************************ 00:05:48.912 START TEST env_pci 00:05:48.912 ************************************ 00:05:48.912 13:33:41 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:48.912 00:05:48.912 00:05:48.912 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.912 http://cunit.sourceforge.net/ 00:05:48.912 00:05:48.912 00:05:48.912 Suite: pci 00:05:48.912 Test: pci_hook ...[2024-06-11 13:33:41.637038] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3416902 has claimed it 00:05:48.912 EAL: Cannot find device (10000:00:01.0) 00:05:48.912 EAL: Failed to attach device on primary process 00:05:48.912 passed 00:05:48.912 00:05:48.912 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:48.912 suites 1 1 n/a 0 0 00:05:48.912 tests 1 1 1 0 0 00:05:48.912 asserts 25 25 25 0 n/a 00:05:48.912 00:05:48.912 Elapsed time = 0.031 seconds 00:05:48.912 00:05:48.912 real 0m0.048s 00:05:48.912 user 0m0.013s 00:05:48.912 sys 0m0.035s 00:05:48.912 13:33:41 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:48.912 13:33:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:48.912 ************************************ 00:05:48.912 END TEST env_pci 00:05:48.912 ************************************ 00:05:48.912 13:33:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:48.912 13:33:41 env -- env/env.sh@15 -- # uname 00:05:48.912 13:33:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:48.912 13:33:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:48.912 13:33:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.912 13:33:41 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:05:48.912 13:33:41 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:48.912 13:33:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.912 ************************************ 00:05:48.912 START TEST env_dpdk_post_init 00:05:48.912 ************************************ 00:05:48.912 13:33:41 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.912 EAL: Detected CPU lcores: 88 00:05:48.912 EAL: Detected NUMA nodes: 2 00:05:48.912 EAL: Detected static linkage of DPDK 00:05:48.912 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.912 EAL: Selected IOVA mode 'VA' 00:05:48.912 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.912 EAL: VFIO support initialized 00:05:48.912 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.171 EAL: Using IOMMU type 1 (Type 1) 00:05:49.738 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:50.675 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:51.243 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:54.530 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:54.530 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001000000 00:05:54.530 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:54.530 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001004000 00:05:55.096 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:55.096 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001008000 00:05:55.355 Starting DPDK initialization... 00:05:55.355 Starting SPDK post initialization... 00:05:55.355 SPDK NVMe probe 00:05:55.355 Attaching to 0000:5e:00.0 00:05:55.355 Attaching to 0000:5f:00.0 00:05:55.355 Attaching to 0000:d8:00.0 00:05:55.355 Attached to 0000:5e:00.0 00:05:55.355 Attached to 0000:5f:00.0 00:05:55.355 Attached to 0000:d8:00.0 00:05:55.355 Cleaning up... 
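Throughout this run the same three NVMe BDFs move between the kernel nvme driver and vfio-pci as scripts/setup.sh config/reset runs. A small sketch for spot-checking where they currently sit, reading the same sysfs links the trace uses (purely illustrative; setup.sh itself performs the actual rebinding):

  for bdf in 0000:5e:00.0 0000:5f:00.0 0000:d8:00.0; do
      # the 'driver' symlink under the device points at whichever driver currently owns it
      drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
      printf '%s -> %s\n' "$bdf" "$drv"
  done
  # expected: vfio-pci while an SPDK app such as env_dpdk_post_init above holds the devices,
  # and nvme again after scripts/setup.sh reset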
00:05:55.355 00:05:55.355 real 0m6.373s 00:05:55.355 user 0m3.989s 00:05:55.355 sys 0m0.173s 00:05:55.355 13:33:48 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:55.355 13:33:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.355 ************************************ 00:05:55.355 END TEST env_dpdk_post_init 00:05:55.355 ************************************ 00:05:55.355 13:33:48 env -- env/env.sh@26 -- # uname 00:05:55.355 13:33:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:55.355 13:33:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.355 13:33:48 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:55.355 13:33:48 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:55.355 13:33:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.355 ************************************ 00:05:55.355 START TEST env_mem_callbacks 00:05:55.355 ************************************ 00:05:55.355 13:33:48 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:55.355 EAL: Detected CPU lcores: 88 00:05:55.355 EAL: Detected NUMA nodes: 2 00:05:55.355 EAL: Detected static linkage of DPDK 00:05:55.355 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:55.355 EAL: Selected IOVA mode 'VA' 00:05:55.355 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.355 EAL: VFIO support initialized 00:05:55.355 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:55.355 00:05:55.355 00:05:55.355 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.355 http://cunit.sourceforge.net/ 00:05:55.355 00:05:55.355 00:05:55.355 Suite: memory 00:05:55.355 Test: test ... 
00:05:55.355 register 0x200000200000 2097152 00:05:55.355 malloc 3145728 00:05:55.355 register 0x200000400000 4194304 00:05:55.355 buf 0x200000500000 len 3145728 PASSED 00:05:55.355 malloc 64 00:05:55.355 buf 0x2000004fff40 len 64 PASSED 00:05:55.355 malloc 4194304 00:05:55.355 register 0x200000800000 6291456 00:05:55.355 buf 0x200000a00000 len 4194304 PASSED 00:05:55.355 free 0x200000500000 3145728 00:05:55.355 free 0x2000004fff40 64 00:05:55.355 unregister 0x200000400000 4194304 PASSED 00:05:55.355 free 0x200000a00000 4194304 00:05:55.355 unregister 0x200000800000 6291456 PASSED 00:05:55.355 malloc 8388608 00:05:55.355 register 0x200000400000 10485760 00:05:55.355 buf 0x200000600000 len 8388608 PASSED 00:05:55.355 free 0x200000600000 8388608 00:05:55.355 unregister 0x200000400000 10485760 PASSED 00:05:55.355 passed 00:05:55.355 00:05:55.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.355 suites 1 1 n/a 0 0 00:05:55.355 tests 1 1 1 0 0 00:05:55.355 asserts 15 15 15 0 n/a 00:05:55.355 00:05:55.355 Elapsed time = 0.007 seconds 00:05:55.355 00:05:55.355 real 0m0.062s 00:05:55.355 user 0m0.020s 00:05:55.355 sys 0m0.041s 00:05:55.355 13:33:48 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:55.355 13:33:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:55.355 ************************************ 00:05:55.355 END TEST env_mem_callbacks 00:05:55.355 ************************************ 00:05:55.669 00:05:55.669 real 0m8.371s 00:05:55.669 user 0m5.077s 00:05:55.669 sys 0m1.078s 00:05:55.669 13:33:48 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:55.670 13:33:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 ************************************ 00:05:55.670 END TEST env 00:05:55.670 ************************************ 00:05:55.670 13:33:48 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:55.670 13:33:48 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:55.670 13:33:48 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:55.670 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 ************************************ 00:05:55.670 START TEST rpc 00:05:55.670 ************************************ 00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:55.670 * Looking for test storage... 00:05:55.670 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:55.670 13:33:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3418086 00:05:55.670 13:33:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:55.670 13:33:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.670 13:33:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3418086 00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@830 -- # '[' -z 3418086 ']' 00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:55.670 13:33:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.670 [2024-06-11 13:33:48.451733] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:05:55.670 [2024-06-11 13:33:48.451821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418086 ] 00:05:55.670 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.670 [2024-06-11 13:33:48.530196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.950 [2024-06-11 13:33:48.636187] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:55.950 [2024-06-11 13:33:48.636237] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3418086' to capture a snapshot of events at runtime. 00:05:55.950 [2024-06-11 13:33:48.636249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:55.950 [2024-06-11 13:33:48.636258] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:55.950 [2024-06-11 13:33:48.636265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3418086 for offline analysis/debug. 00:05:55.950 [2024-06-11 13:33:48.636284] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.209 13:33:48 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:56.209 13:33:48 rpc -- common/autotest_common.sh@863 -- # return 0 00:05:56.209 13:33:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:56.209 13:33:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:56.209 13:33:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:56.209 13:33:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:56.209 13:33:48 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.209 13:33:48 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.209 13:33:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.209 ************************************ 00:05:56.209 START TEST rpc_integrity 00:05:56.209 ************************************ 00:05:56.209 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:56.209 13:33:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.209 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.209 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.209 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.209 13:33:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.209 13:33:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.209 13:33:48 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.210 13:33:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.210 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.210 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.210 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.210 13:33:48 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:56.210 13:33:48 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:56.210 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.210 13:33:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:56.210 { 00:05:56.210 "name": "Malloc0", 00:05:56.210 "aliases": [ 00:05:56.210 "70316b97-f972-4b19-9a6f-8ecd3f30fa35" 00:05:56.210 ], 00:05:56.210 "product_name": "Malloc disk", 00:05:56.210 "block_size": 512, 00:05:56.210 "num_blocks": 16384, 00:05:56.210 "uuid": "70316b97-f972-4b19-9a6f-8ecd3f30fa35", 00:05:56.210 "assigned_rate_limits": { 00:05:56.210 "rw_ios_per_sec": 0, 00:05:56.210 "rw_mbytes_per_sec": 0, 00:05:56.210 "r_mbytes_per_sec": 0, 00:05:56.210 "w_mbytes_per_sec": 0 00:05:56.210 }, 00:05:56.210 "claimed": false, 00:05:56.210 "zoned": false, 00:05:56.210 "supported_io_types": { 00:05:56.210 "read": true, 00:05:56.210 "write": true, 00:05:56.210 "unmap": true, 00:05:56.210 "write_zeroes": true, 00:05:56.210 "flush": true, 00:05:56.210 "reset": true, 00:05:56.210 "compare": false, 00:05:56.210 "compare_and_write": false, 00:05:56.210 "abort": true, 00:05:56.210 "nvme_admin": false, 00:05:56.210 "nvme_io": false 00:05:56.210 }, 00:05:56.210 "memory_domains": [ 00:05:56.210 { 00:05:56.210 "dma_device_id": "system", 00:05:56.210 "dma_device_type": 1 00:05:56.210 }, 00:05:56.210 { 00:05:56.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.210 "dma_device_type": 2 00:05:56.210 } 00:05:56.210 ], 00:05:56.210 "driver_specific": {} 00:05:56.210 } 00:05:56.210 ]' 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.210 [2024-06-11 13:33:49.058059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:56.210 [2024-06-11 13:33:49.058099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:56.210 [2024-06-11 13:33:49.058119] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4c8fe80 00:05:56.210 [2024-06-11 13:33:49.058129] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:56.210 [2024-06-11 13:33:49.059338] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:56.210 [2024-06-11 13:33:49.059364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:56.210 Passthru0 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@20 
-- # rpc_cmd bdev_get_bdevs 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.210 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:56.210 { 00:05:56.210 "name": "Malloc0", 00:05:56.210 "aliases": [ 00:05:56.210 "70316b97-f972-4b19-9a6f-8ecd3f30fa35" 00:05:56.210 ], 00:05:56.210 "product_name": "Malloc disk", 00:05:56.210 "block_size": 512, 00:05:56.210 "num_blocks": 16384, 00:05:56.210 "uuid": "70316b97-f972-4b19-9a6f-8ecd3f30fa35", 00:05:56.210 "assigned_rate_limits": { 00:05:56.210 "rw_ios_per_sec": 0, 00:05:56.210 "rw_mbytes_per_sec": 0, 00:05:56.210 "r_mbytes_per_sec": 0, 00:05:56.210 "w_mbytes_per_sec": 0 00:05:56.210 }, 00:05:56.210 "claimed": true, 00:05:56.210 "claim_type": "exclusive_write", 00:05:56.210 "zoned": false, 00:05:56.210 "supported_io_types": { 00:05:56.210 "read": true, 00:05:56.210 "write": true, 00:05:56.210 "unmap": true, 00:05:56.210 "write_zeroes": true, 00:05:56.210 "flush": true, 00:05:56.210 "reset": true, 00:05:56.210 "compare": false, 00:05:56.210 "compare_and_write": false, 00:05:56.210 "abort": true, 00:05:56.210 "nvme_admin": false, 00:05:56.210 "nvme_io": false 00:05:56.210 }, 00:05:56.210 "memory_domains": [ 00:05:56.210 { 00:05:56.210 "dma_device_id": "system", 00:05:56.210 "dma_device_type": 1 00:05:56.210 }, 00:05:56.210 { 00:05:56.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.210 "dma_device_type": 2 00:05:56.210 } 00:05:56.210 ], 00:05:56.210 "driver_specific": {} 00:05:56.210 }, 00:05:56.210 { 00:05:56.210 "name": "Passthru0", 00:05:56.210 "aliases": [ 00:05:56.210 "104bc650-b25b-5dc3-bf31-a94507d89bca" 00:05:56.210 ], 00:05:56.210 "product_name": "passthru", 00:05:56.210 "block_size": 512, 00:05:56.210 "num_blocks": 16384, 00:05:56.210 "uuid": "104bc650-b25b-5dc3-bf31-a94507d89bca", 00:05:56.210 "assigned_rate_limits": { 00:05:56.210 "rw_ios_per_sec": 0, 00:05:56.210 "rw_mbytes_per_sec": 0, 00:05:56.210 "r_mbytes_per_sec": 0, 00:05:56.210 "w_mbytes_per_sec": 0 00:05:56.210 }, 00:05:56.210 "claimed": false, 00:05:56.210 "zoned": false, 00:05:56.210 "supported_io_types": { 00:05:56.210 "read": true, 00:05:56.210 "write": true, 00:05:56.210 "unmap": true, 00:05:56.210 "write_zeroes": true, 00:05:56.210 "flush": true, 00:05:56.210 "reset": true, 00:05:56.210 "compare": false, 00:05:56.210 "compare_and_write": false, 00:05:56.210 "abort": true, 00:05:56.210 "nvme_admin": false, 00:05:56.210 "nvme_io": false 00:05:56.210 }, 00:05:56.210 "memory_domains": [ 00:05:56.210 { 00:05:56.210 "dma_device_id": "system", 00:05:56.210 "dma_device_type": 1 00:05:56.210 }, 00:05:56.210 { 00:05:56.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.210 "dma_device_type": 2 00:05:56.210 } 00:05:56.210 ], 00:05:56.210 "driver_specific": { 00:05:56.210 "passthru": { 00:05:56.210 "name": "Passthru0", 00:05:56.210 "base_bdev_name": "Malloc0" 00:05:56.210 } 00:05:56.210 } 00:05:56.210 } 00:05:56.210 ]' 00:05:56.210 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
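The rpc_integrity run above builds a malloc bdev, layers a passthru bdev on it, and verifies the bdev list before tearing both down. As a reference, here is a minimal standalone sketch of the same RPC sequence against a running spdk_tgt; it assumes the stock scripts/rpc.py client on the default socket, and the bdev names (Malloc0, Passthru0) are only illustrative:

    #!/usr/bin/env bash
    # Sketch only - mirrors the rpc_integrity flow logged above, not the test script itself.
    RPC=./scripts/rpc.py                      # assumed path to the SPDK RPC client

    $RPC bdev_malloc_create 8 512             # 8 MiB bdev with 512-byte blocks; prints its name (e.g. Malloc0)
    $RPC bdev_passthru_create -b Malloc0 -p Passthru0
    $RPC bdev_get_bdevs | jq length           # expect 2: Malloc0 (claimed, exclusive_write) plus Passthru0
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc0
    $RPC bdev_get_bdevs | jq length           # expect 0 once both are gone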
00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.469 13:33:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.469 00:05:56.469 real 0m0.299s 00:05:56.469 user 0m0.189s 00:05:56.469 sys 0m0.040s 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.469 13:33:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 ************************************ 00:05:56.469 END TEST rpc_integrity 00:05:56.469 ************************************ 00:05:56.469 13:33:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:56.469 13:33:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.469 13:33:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.469 13:33:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 ************************************ 00:05:56.469 START TEST rpc_plugins 00:05:56.469 ************************************ 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:05:56.469 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.469 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:56.469 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.469 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.469 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:56.469 { 00:05:56.470 "name": "Malloc1", 00:05:56.470 "aliases": [ 00:05:56.470 "db825814-5e23-4bd8-aadc-83921bbd1e44" 00:05:56.470 ], 00:05:56.470 "product_name": "Malloc disk", 00:05:56.470 "block_size": 4096, 00:05:56.470 "num_blocks": 256, 00:05:56.470 "uuid": "db825814-5e23-4bd8-aadc-83921bbd1e44", 00:05:56.470 "assigned_rate_limits": { 00:05:56.470 "rw_ios_per_sec": 0, 00:05:56.470 "rw_mbytes_per_sec": 0, 00:05:56.470 "r_mbytes_per_sec": 0, 00:05:56.470 "w_mbytes_per_sec": 0 00:05:56.470 }, 00:05:56.470 "claimed": false, 00:05:56.470 "zoned": false, 00:05:56.470 "supported_io_types": { 00:05:56.470 "read": true, 00:05:56.470 "write": true, 00:05:56.470 "unmap": true, 00:05:56.470 "write_zeroes": true, 
00:05:56.470 "flush": true, 00:05:56.470 "reset": true, 00:05:56.470 "compare": false, 00:05:56.470 "compare_and_write": false, 00:05:56.470 "abort": true, 00:05:56.470 "nvme_admin": false, 00:05:56.470 "nvme_io": false 00:05:56.470 }, 00:05:56.470 "memory_domains": [ 00:05:56.470 { 00:05:56.470 "dma_device_id": "system", 00:05:56.470 "dma_device_type": 1 00:05:56.470 }, 00:05:56.470 { 00:05:56.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:56.470 "dma_device_type": 2 00:05:56.470 } 00:05:56.470 ], 00:05:56.470 "driver_specific": {} 00:05:56.470 } 00:05:56.470 ]' 00:05:56.470 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:56.470 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:56.470 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:56.470 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.470 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.470 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.470 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:56.470 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.470 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.728 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.728 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:56.728 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:56.728 13:33:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:56.728 00:05:56.728 real 0m0.150s 00:05:56.728 user 0m0.093s 00:05:56.728 sys 0m0.021s 00:05:56.728 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.728 13:33:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:56.728 ************************************ 00:05:56.728 END TEST rpc_plugins 00:05:56.728 ************************************ 00:05:56.728 13:33:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:56.728 13:33:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.728 13:33:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.728 13:33:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.728 ************************************ 00:05:56.728 START TEST rpc_trace_cmd_test 00:05:56.728 ************************************ 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.728 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:56.728 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3418086", 00:05:56.728 "tpoint_group_mask": "0x8", 00:05:56.728 "iscsi_conn": { 00:05:56.728 "mask": "0x2", 00:05:56.728 "tpoint_mask": "0x0" 00:05:56.728 }, 00:05:56.729 "scsi": { 00:05:56.729 "mask": "0x4", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "bdev": { 00:05:56.729 "mask": "0x8", 00:05:56.729 "tpoint_mask": 
"0xffffffffffffffff" 00:05:56.729 }, 00:05:56.729 "nvmf_rdma": { 00:05:56.729 "mask": "0x10", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "nvmf_tcp": { 00:05:56.729 "mask": "0x20", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "ftl": { 00:05:56.729 "mask": "0x40", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "blobfs": { 00:05:56.729 "mask": "0x80", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "dsa": { 00:05:56.729 "mask": "0x200", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "thread": { 00:05:56.729 "mask": "0x400", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "nvme_pcie": { 00:05:56.729 "mask": "0x800", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "iaa": { 00:05:56.729 "mask": "0x1000", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "nvme_tcp": { 00:05:56.729 "mask": "0x2000", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "bdev_nvme": { 00:05:56.729 "mask": "0x4000", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 }, 00:05:56.729 "sock": { 00:05:56.729 "mask": "0x8000", 00:05:56.729 "tpoint_mask": "0x0" 00:05:56.729 } 00:05:56.729 }' 00:05:56.729 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:56.729 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:56.729 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:56.729 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:56.729 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:56.987 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:56.987 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:56.987 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:56.988 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:56.988 13:33:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:56.988 00:05:56.988 real 0m0.259s 00:05:56.988 user 0m0.221s 00:05:56.988 sys 0m0.028s 00:05:56.988 13:33:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.988 13:33:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:56.988 ************************************ 00:05:56.988 END TEST rpc_trace_cmd_test 00:05:56.988 ************************************ 00:05:56.988 13:33:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:56.988 13:33:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:56.988 13:33:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:56.988 13:33:49 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:56.988 13:33:49 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.988 13:33:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.988 ************************************ 00:05:56.988 START TEST rpc_daemon_integrity 00:05:56.988 ************************************ 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:56.988 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:57.247 { 00:05:57.247 "name": "Malloc2", 00:05:57.247 "aliases": [ 00:05:57.247 "fd76a1a2-5e51-4b0c-80c9-f1a08b70f721" 00:05:57.247 ], 00:05:57.247 "product_name": "Malloc disk", 00:05:57.247 "block_size": 512, 00:05:57.247 "num_blocks": 16384, 00:05:57.247 "uuid": "fd76a1a2-5e51-4b0c-80c9-f1a08b70f721", 00:05:57.247 "assigned_rate_limits": { 00:05:57.247 "rw_ios_per_sec": 0, 00:05:57.247 "rw_mbytes_per_sec": 0, 00:05:57.247 "r_mbytes_per_sec": 0, 00:05:57.247 "w_mbytes_per_sec": 0 00:05:57.247 }, 00:05:57.247 "claimed": false, 00:05:57.247 "zoned": false, 00:05:57.247 "supported_io_types": { 00:05:57.247 "read": true, 00:05:57.247 "write": true, 00:05:57.247 "unmap": true, 00:05:57.247 "write_zeroes": true, 00:05:57.247 "flush": true, 00:05:57.247 "reset": true, 00:05:57.247 "compare": false, 00:05:57.247 "compare_and_write": false, 00:05:57.247 "abort": true, 00:05:57.247 "nvme_admin": false, 00:05:57.247 "nvme_io": false 00:05:57.247 }, 00:05:57.247 "memory_domains": [ 00:05:57.247 { 00:05:57.247 "dma_device_id": "system", 00:05:57.247 "dma_device_type": 1 00:05:57.247 }, 00:05:57.247 { 00:05:57.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.247 "dma_device_type": 2 00:05:57.247 } 00:05:57.247 ], 00:05:57.247 "driver_specific": {} 00:05:57.247 } 00:05:57.247 ]' 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.247 [2024-06-11 13:33:49.968500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:57.247 [2024-06-11 13:33:49.968540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:57.247 [2024-06-11 13:33:49.968562] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4af45f0 00:05:57.247 [2024-06-11 13:33:49.968571] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:57.247 [2024-06-11 13:33:49.969773] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:57.247 [2024-06-11 13:33:49.969800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:57.247 Passthru0 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.247 13:33:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:57.247 { 00:05:57.247 "name": "Malloc2", 00:05:57.247 "aliases": [ 00:05:57.247 "fd76a1a2-5e51-4b0c-80c9-f1a08b70f721" 00:05:57.247 ], 00:05:57.247 "product_name": "Malloc disk", 00:05:57.247 "block_size": 512, 00:05:57.247 "num_blocks": 16384, 00:05:57.247 "uuid": "fd76a1a2-5e51-4b0c-80c9-f1a08b70f721", 00:05:57.247 "assigned_rate_limits": { 00:05:57.247 "rw_ios_per_sec": 0, 00:05:57.247 "rw_mbytes_per_sec": 0, 00:05:57.247 "r_mbytes_per_sec": 0, 00:05:57.247 "w_mbytes_per_sec": 0 00:05:57.247 }, 00:05:57.247 "claimed": true, 00:05:57.247 "claim_type": "exclusive_write", 00:05:57.247 "zoned": false, 00:05:57.247 "supported_io_types": { 00:05:57.247 "read": true, 00:05:57.247 "write": true, 00:05:57.247 "unmap": true, 00:05:57.247 "write_zeroes": true, 00:05:57.247 "flush": true, 00:05:57.247 "reset": true, 00:05:57.247 "compare": false, 00:05:57.247 "compare_and_write": false, 00:05:57.247 "abort": true, 00:05:57.247 "nvme_admin": false, 00:05:57.247 "nvme_io": false 00:05:57.247 }, 00:05:57.247 "memory_domains": [ 00:05:57.247 { 00:05:57.247 "dma_device_id": "system", 00:05:57.247 "dma_device_type": 1 00:05:57.247 }, 00:05:57.247 { 00:05:57.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.247 "dma_device_type": 2 00:05:57.247 } 00:05:57.247 ], 00:05:57.247 "driver_specific": {} 00:05:57.247 }, 00:05:57.247 { 00:05:57.247 "name": "Passthru0", 00:05:57.247 "aliases": [ 00:05:57.247 "667c91c9-5683-5a04-8c28-2318d140426b" 00:05:57.247 ], 00:05:57.247 "product_name": "passthru", 00:05:57.247 "block_size": 512, 00:05:57.247 "num_blocks": 16384, 00:05:57.247 "uuid": "667c91c9-5683-5a04-8c28-2318d140426b", 00:05:57.247 "assigned_rate_limits": { 00:05:57.247 "rw_ios_per_sec": 0, 00:05:57.247 "rw_mbytes_per_sec": 0, 00:05:57.247 "r_mbytes_per_sec": 0, 00:05:57.247 "w_mbytes_per_sec": 0 00:05:57.247 }, 00:05:57.247 "claimed": false, 00:05:57.247 "zoned": false, 00:05:57.247 "supported_io_types": { 00:05:57.247 "read": true, 00:05:57.247 "write": true, 00:05:57.247 "unmap": true, 00:05:57.247 "write_zeroes": true, 00:05:57.247 "flush": true, 00:05:57.247 "reset": true, 00:05:57.247 "compare": false, 00:05:57.247 "compare_and_write": false, 00:05:57.247 "abort": true, 00:05:57.247 "nvme_admin": false, 00:05:57.247 "nvme_io": false 00:05:57.247 }, 00:05:57.247 "memory_domains": [ 00:05:57.247 { 00:05:57.247 "dma_device_id": "system", 00:05:57.247 "dma_device_type": 1 00:05:57.247 }, 00:05:57.247 { 00:05:57.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:57.247 "dma_device_type": 2 00:05:57.247 } 00:05:57.247 ], 00:05:57.247 "driver_specific": { 00:05:57.247 "passthru": { 00:05:57.247 "name": "Passthru0", 00:05:57.247 "base_bdev_name": "Malloc2" 00:05:57.247 } 00:05:57.247 } 00:05:57.247 } 00:05:57.247 ]' 00:05:57.247 13:33:49 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:57.247 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:57.248 00:05:57.248 real 0m0.291s 00:05:57.248 user 0m0.195s 00:05:57.248 sys 0m0.037s 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.248 13:33:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:57.248 ************************************ 00:05:57.248 END TEST rpc_daemon_integrity 00:05:57.248 ************************************ 00:05:57.248 13:33:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:57.248 13:33:50 rpc -- rpc/rpc.sh@84 -- # killprocess 3418086 00:05:57.248 13:33:50 rpc -- common/autotest_common.sh@949 -- # '[' -z 3418086 ']' 00:05:57.248 13:33:50 rpc -- common/autotest_common.sh@953 -- # kill -0 3418086 00:05:57.248 13:33:50 rpc -- common/autotest_common.sh@954 -- # uname 00:05:57.248 13:33:50 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:57.248 13:33:50 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3418086 00:05:57.507 13:33:50 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:57.507 13:33:50 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:57.507 13:33:50 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3418086' 00:05:57.507 killing process with pid 3418086 00:05:57.507 13:33:50 rpc -- common/autotest_common.sh@968 -- # kill 3418086 00:05:57.507 13:33:50 rpc -- common/autotest_common.sh@973 -- # wait 3418086 00:05:57.765 00:05:57.765 real 0m2.223s 00:05:57.765 user 0m2.915s 00:05:57.765 sys 0m0.699s 00:05:57.765 13:33:50 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.765 13:33:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.765 ************************************ 00:05:57.765 END TEST rpc 00:05:57.765 ************************************ 00:05:57.765 13:33:50 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:57.766 13:33:50 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:57.766 13:33:50 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.766 13:33:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.766 ************************************ 00:05:57.766 START TEST skip_rpc 00:05:57.766 ************************************ 00:05:57.766 13:33:50 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:58.024 * Looking for test storage... 00:05:58.024 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:58.024 13:33:50 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:05:58.024 13:33:50 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:05:58.024 13:33:50 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:58.024 13:33:50 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:58.024 13:33:50 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:58.024 13:33:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.024 ************************************ 00:05:58.024 START TEST skip_rpc 00:05:58.024 ************************************ 00:05:58.024 13:33:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:05:58.024 13:33:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3418678 00:05:58.024 13:33:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.024 13:33:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:58.024 13:33:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:58.024 [2024-06-11 13:33:50.761735] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:05:58.024 [2024-06-11 13:33:50.761800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418678 ] 00:05:58.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.024 [2024-06-11 13:33:50.836589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.024 [2024-06-11 13:33:50.935112] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3418678 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 3418678 ']' 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 3418678 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3418678 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3418678' 00:06:03.292 killing process with pid 3418678 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 3418678 00:06:03.292 13:33:55 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 3418678 00:06:03.292 00:06:03.292 real 0m5.422s 00:06:03.292 user 0m5.134s 00:06:03.292 sys 0m0.306s 00:06:03.292 13:33:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.292 13:33:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.293 ************************************ 00:06:03.293 END TEST skip_rpc 
00:06:03.293 ************************************ 00:06:03.293 13:33:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:03.293 13:33:56 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:03.293 13:33:56 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.293 13:33:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 ************************************ 00:06:03.552 START TEST skip_rpc_with_json 00:06:03.552 ************************************ 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3419562 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3419562 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 3419562 ']' 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:03.552 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 [2024-06-11 13:33:56.244342] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
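The skip_rpc_with_json case starting here saves the live configuration with save_config and later replays it via spdk_tgt --json, checking the log for the restored TCP transport. Roughly, and only as a hedged sketch using the same paths this log uses (the first target is stopped before the relaunch):

    # Sketch only - the config round trip that skip_rpc_with_json exercises below.
    RPC=./scripts/rpc.py
    CONF=test/rpc/config.json                 # same file name the test uses
    LOG=test/rpc/log.txt

    $RPC nvmf_create_transport -t tcp         # give the target something worth saving
    $RPC save_config > "$CONF"                # dump all subsystems as JSON (see the dump below)
    # Relaunch from the saved file with the RPC server disabled, then confirm the transport came back:
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONF" &> "$LOG" &
    sleep 5
    grep -q 'TCP Transport Init' "$LOG" && echo "transport restored from JSON"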
00:06:03.552 [2024-06-11 13:33:56.244408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419562 ] 00:06:03.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.552 [2024-06-11 13:33:56.311431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.552 [2024-06-11 13:33:56.413717] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.811 [2024-06-11 13:33:56.650895] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:03.811 request: 00:06:03.811 { 00:06:03.811 "trtype": "tcp", 00:06:03.811 "method": "nvmf_get_transports", 00:06:03.811 "req_id": 1 00:06:03.811 } 00:06:03.811 Got JSON-RPC error response 00:06:03.811 response: 00:06:03.811 { 00:06:03.811 "code": -19, 00:06:03.811 "message": "No such device" 00:06:03.811 } 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.811 [2024-06-11 13:33:56.663015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:03.811 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.071 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:04.071 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:04.071 { 00:06:04.071 "subsystems": [ 00:06:04.071 { 00:06:04.071 "subsystem": "scheduler", 00:06:04.071 "config": [ 00:06:04.071 { 00:06:04.071 "method": "framework_set_scheduler", 00:06:04.071 "params": { 00:06:04.071 "name": "static" 00:06:04.071 } 00:06:04.071 } 00:06:04.071 ] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "vmd", 00:06:04.071 "config": [] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "sock", 00:06:04.071 "config": [ 00:06:04.071 { 00:06:04.071 "method": "sock_set_default_impl", 00:06:04.071 "params": { 00:06:04.071 "impl_name": "posix" 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "sock_impl_set_options", 00:06:04.071 "params": { 00:06:04.071 "impl_name": "ssl", 00:06:04.071 "recv_buf_size": 4096, 00:06:04.071 "send_buf_size": 4096, 00:06:04.071 "enable_recv_pipe": true, 00:06:04.071 "enable_quickack": 
false, 00:06:04.071 "enable_placement_id": 0, 00:06:04.071 "enable_zerocopy_send_server": true, 00:06:04.071 "enable_zerocopy_send_client": false, 00:06:04.071 "zerocopy_threshold": 0, 00:06:04.071 "tls_version": 0, 00:06:04.071 "enable_ktls": false 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "sock_impl_set_options", 00:06:04.071 "params": { 00:06:04.071 "impl_name": "posix", 00:06:04.071 "recv_buf_size": 2097152, 00:06:04.071 "send_buf_size": 2097152, 00:06:04.071 "enable_recv_pipe": true, 00:06:04.071 "enable_quickack": false, 00:06:04.071 "enable_placement_id": 0, 00:06:04.071 "enable_zerocopy_send_server": true, 00:06:04.071 "enable_zerocopy_send_client": false, 00:06:04.071 "zerocopy_threshold": 0, 00:06:04.071 "tls_version": 0, 00:06:04.071 "enable_ktls": false 00:06:04.071 } 00:06:04.071 } 00:06:04.071 ] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "iobuf", 00:06:04.071 "config": [ 00:06:04.071 { 00:06:04.071 "method": "iobuf_set_options", 00:06:04.071 "params": { 00:06:04.071 "small_pool_count": 8192, 00:06:04.071 "large_pool_count": 1024, 00:06:04.071 "small_bufsize": 8192, 00:06:04.071 "large_bufsize": 135168 00:06:04.071 } 00:06:04.071 } 00:06:04.071 ] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "keyring", 00:06:04.071 "config": [] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "vfio_user_target", 00:06:04.071 "config": null 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "accel", 00:06:04.071 "config": [ 00:06:04.071 { 00:06:04.071 "method": "accel_set_options", 00:06:04.071 "params": { 00:06:04.071 "small_cache_size": 128, 00:06:04.071 "large_cache_size": 16, 00:06:04.071 "task_count": 2048, 00:06:04.071 "sequence_count": 2048, 00:06:04.071 "buf_count": 2048 00:06:04.071 } 00:06:04.071 } 00:06:04.071 ] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "bdev", 00:06:04.071 "config": [ 00:06:04.071 { 00:06:04.071 "method": "bdev_set_options", 00:06:04.071 "params": { 00:06:04.071 "bdev_io_pool_size": 65535, 00:06:04.071 "bdev_io_cache_size": 256, 00:06:04.071 "bdev_auto_examine": true, 00:06:04.071 "iobuf_small_cache_size": 128, 00:06:04.071 "iobuf_large_cache_size": 16 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "bdev_raid_set_options", 00:06:04.071 "params": { 00:06:04.071 "process_window_size_kb": 1024 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "bdev_nvme_set_options", 00:06:04.071 "params": { 00:06:04.071 "action_on_timeout": "none", 00:06:04.071 "timeout_us": 0, 00:06:04.071 "timeout_admin_us": 0, 00:06:04.071 "keep_alive_timeout_ms": 10000, 00:06:04.071 "arbitration_burst": 0, 00:06:04.071 "low_priority_weight": 0, 00:06:04.071 "medium_priority_weight": 0, 00:06:04.071 "high_priority_weight": 0, 00:06:04.071 "nvme_adminq_poll_period_us": 10000, 00:06:04.071 "nvme_ioq_poll_period_us": 0, 00:06:04.071 "io_queue_requests": 0, 00:06:04.071 "delay_cmd_submit": true, 00:06:04.071 "transport_retry_count": 4, 00:06:04.071 "bdev_retry_count": 3, 00:06:04.071 "transport_ack_timeout": 0, 00:06:04.071 "ctrlr_loss_timeout_sec": 0, 00:06:04.071 "reconnect_delay_sec": 0, 00:06:04.071 "fast_io_fail_timeout_sec": 0, 00:06:04.071 "disable_auto_failback": false, 00:06:04.071 "generate_uuids": false, 00:06:04.071 "transport_tos": 0, 00:06:04.071 "nvme_error_stat": false, 00:06:04.071 "rdma_srq_size": 0, 00:06:04.071 "io_path_stat": false, 00:06:04.071 "allow_accel_sequence": false, 00:06:04.071 "rdma_max_cq_size": 0, 00:06:04.071 "rdma_cm_event_timeout_ms": 0, 
00:06:04.071 "dhchap_digests": [ 00:06:04.071 "sha256", 00:06:04.071 "sha384", 00:06:04.071 "sha512" 00:06:04.071 ], 00:06:04.071 "dhchap_dhgroups": [ 00:06:04.071 "null", 00:06:04.071 "ffdhe2048", 00:06:04.071 "ffdhe3072", 00:06:04.071 "ffdhe4096", 00:06:04.071 "ffdhe6144", 00:06:04.071 "ffdhe8192" 00:06:04.071 ] 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "bdev_nvme_set_hotplug", 00:06:04.071 "params": { 00:06:04.071 "period_us": 100000, 00:06:04.071 "enable": false 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "bdev_iscsi_set_options", 00:06:04.071 "params": { 00:06:04.071 "timeout_sec": 30 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "bdev_wait_for_examine" 00:06:04.071 } 00:06:04.071 ] 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "subsystem": "nvmf", 00:06:04.071 "config": [ 00:06:04.071 { 00:06:04.071 "method": "nvmf_set_config", 00:06:04.071 "params": { 00:06:04.071 "discovery_filter": "match_any", 00:06:04.071 "admin_cmd_passthru": { 00:06:04.071 "identify_ctrlr": false 00:06:04.071 } 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "nvmf_set_max_subsystems", 00:06:04.071 "params": { 00:06:04.071 "max_subsystems": 1024 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "nvmf_set_crdt", 00:06:04.071 "params": { 00:06:04.071 "crdt1": 0, 00:06:04.071 "crdt2": 0, 00:06:04.071 "crdt3": 0 00:06:04.071 } 00:06:04.071 }, 00:06:04.071 { 00:06:04.071 "method": "nvmf_create_transport", 00:06:04.071 "params": { 00:06:04.071 "trtype": "TCP", 00:06:04.071 "max_queue_depth": 128, 00:06:04.071 "max_io_qpairs_per_ctrlr": 127, 00:06:04.071 "in_capsule_data_size": 4096, 00:06:04.071 "max_io_size": 131072, 00:06:04.071 "io_unit_size": 131072, 00:06:04.071 "max_aq_depth": 128, 00:06:04.071 "num_shared_buffers": 511, 00:06:04.071 "buf_cache_size": 4294967295, 00:06:04.071 "dif_insert_or_strip": false, 00:06:04.071 "zcopy": false, 00:06:04.071 "c2h_success": true, 00:06:04.071 "sock_priority": 0, 00:06:04.071 "abort_timeout_sec": 1, 00:06:04.071 "ack_timeout": 0, 00:06:04.071 "data_wr_pool_size": 0 00:06:04.071 } 00:06:04.071 } 00:06:04.071 ] 00:06:04.072 }, 00:06:04.072 { 00:06:04.072 "subsystem": "nbd", 00:06:04.072 "config": [] 00:06:04.072 }, 00:06:04.072 { 00:06:04.072 "subsystem": "ublk", 00:06:04.072 "config": [] 00:06:04.072 }, 00:06:04.072 { 00:06:04.072 "subsystem": "vhost_blk", 00:06:04.072 "config": [] 00:06:04.072 }, 00:06:04.072 { 00:06:04.072 "subsystem": "scsi", 00:06:04.072 "config": null 00:06:04.072 }, 00:06:04.072 { 00:06:04.072 "subsystem": "iscsi", 00:06:04.072 "config": [ 00:06:04.072 { 00:06:04.072 "method": "iscsi_set_options", 00:06:04.072 "params": { 00:06:04.072 "node_base": "iqn.2016-06.io.spdk", 00:06:04.072 "max_sessions": 128, 00:06:04.072 "max_connections_per_session": 2, 00:06:04.072 "max_queue_depth": 64, 00:06:04.072 "default_time2wait": 2, 00:06:04.072 "default_time2retain": 20, 00:06:04.072 "first_burst_length": 8192, 00:06:04.072 "immediate_data": true, 00:06:04.072 "allow_duplicated_isid": false, 00:06:04.072 "error_recovery_level": 0, 00:06:04.072 "nop_timeout": 60, 00:06:04.072 "nop_in_interval": 30, 00:06:04.072 "disable_chap": false, 00:06:04.072 "require_chap": false, 00:06:04.072 "mutual_chap": false, 00:06:04.072 "chap_group": 0, 00:06:04.072 "max_large_datain_per_connection": 64, 00:06:04.072 "max_r2t_per_connection": 4, 00:06:04.072 "pdu_pool_size": 36864, 00:06:04.072 "immediate_data_pool_size": 16384, 00:06:04.072 "data_out_pool_size": 2048 
00:06:04.072 } 00:06:04.072 } 00:06:04.072 ] 00:06:04.072 }, 00:06:04.072 { 00:06:04.072 "subsystem": "vhost_scsi", 00:06:04.072 "config": [] 00:06:04.072 } 00:06:04.072 ] 00:06:04.072 } 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3419562 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3419562 ']' 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3419562 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3419562 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3419562' 00:06:04.072 killing process with pid 3419562 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3419562 00:06:04.072 13:33:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3419562 00:06:04.331 13:33:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3419780 00:06:04.331 13:33:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:04.331 13:33:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3419780 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 3419780 ']' 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 3419780 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3419780 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3419780' 00:06:09.606 killing process with pid 3419780 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 3419780 00:06:09.606 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 3419780 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:06:09.866 00:06:09.866 real 
0m6.431s 00:06:09.866 user 0m6.119s 00:06:09.866 sys 0m0.641s 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 ************************************ 00:06:09.866 END TEST skip_rpc_with_json 00:06:09.866 ************************************ 00:06:09.866 13:34:02 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:09.866 13:34:02 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:09.866 13:34:02 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.866 13:34:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 ************************************ 00:06:09.866 START TEST skip_rpc_with_delay 00:06:09.866 ************************************ 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:09.866 [2024-06-11 13:34:02.743312] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
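skip_rpc_with_delay above only checks an argument-validation path: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server is also given. A hedged sketch of that assertion, reusing the build path seen elsewhere in this log:

    # Sketch only - expect this invocation to fail fast with the error shown above.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started without an RPC server while told to wait for RPC" >&2
        exit 1
    fi
    # Expected on stderr:
    #   app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.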
00:06:09.866 [2024-06-11 13:34:02.743405] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:09.866 00:06:09.866 real 0m0.040s 00:06:09.866 user 0m0.020s 00:06:09.866 sys 0m0.020s 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.866 13:34:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:09.866 ************************************ 00:06:09.866 END TEST skip_rpc_with_delay 00:06:09.866 ************************************ 00:06:10.125 13:34:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:10.125 13:34:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:10.125 13:34:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:10.125 13:34:02 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:10.125 13:34:02 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:10.125 13:34:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.125 ************************************ 00:06:10.125 START TEST exit_on_failed_rpc_init 00:06:10.125 ************************************ 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3420687 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3420687 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 3420687 ']' 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:10.125 13:34:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.125 [2024-06-11 13:34:02.842282] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:10.125 [2024-06-11 13:34:02.842347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420687 ] 00:06:10.125 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.125 [2024-06-11 13:34:02.909044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.125 [2024-06-11 13:34:03.009353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.384 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:10.385 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:10.385 [2024-06-11 13:34:03.257325] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:10.385 [2024-06-11 13:34:03.257374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420703 ] 00:06:10.385 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.644 [2024-06-11 13:34:03.315372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.644 [2024-06-11 13:34:03.412089] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.644 [2024-06-11 13:34:03.412183] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:06:10.644 [2024-06-11 13:34:03.412203] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:10.644 [2024-06-11 13:34:03.412213] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3420687 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 3420687 ']' 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 3420687 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3420687 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3420687' 00:06:10.644 killing process with pid 3420687 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 3420687 00:06:10.644 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 3420687 00:06:11.213 00:06:11.213 real 0m1.092s 00:06:11.213 user 0m1.231s 00:06:11.213 sys 0m0.400s 00:06:11.213 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.213 13:34:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.213 ************************************ 00:06:11.213 END TEST exit_on_failed_rpc_init 00:06:11.213 ************************************ 00:06:11.213 13:34:03 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:06:11.213 00:06:11.213 real 0m13.319s 00:06:11.213 user 0m12.621s 00:06:11.213 sys 0m1.606s 00:06:11.213 13:34:03 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.213 13:34:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.213 ************************************ 00:06:11.213 END TEST skip_rpc 00:06:11.213 ************************************ 00:06:11.213 13:34:03 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.213 13:34:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:11.213 13:34:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 
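exit_on_failed_rpc_init, which finishes above, drives the failure by starting a second spdk_tgt while the first still owns the default RPC socket. A minimal sketch of that collision, assuming the default /var/tmp/spdk.sock:

    # Sketch only - two targets fighting over the same RPC Unix domain socket.
    build/bin/spdk_tgt -m 0x1 &               # first instance binds /var/tmp/spdk.sock
    sleep 1
    build/bin/spdk_tgt -m 0x2                 # second instance is expected to exit non-zero with:
                                              #   "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    echo "second target exit code: $?"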
00:06:11.213 13:34:03 -- common/autotest_common.sh@10 -- # set +x 00:06:11.213 ************************************ 00:06:11.213 START TEST rpc_client 00:06:11.213 ************************************ 00:06:11.213 13:34:03 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.213 * Looking for test storage... 00:06:11.213 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:11.213 13:34:04 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:11.213 OK 00:06:11.213 13:34:04 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:11.213 00:06:11.213 real 0m0.105s 00:06:11.213 user 0m0.048s 00:06:11.213 sys 0m0.063s 00:06:11.213 13:34:04 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.213 13:34:04 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:11.213 ************************************ 00:06:11.213 END TEST rpc_client 00:06:11.213 ************************************ 00:06:11.473 13:34:04 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.473 13:34:04 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:11.473 13:34:04 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.473 13:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:11.473 ************************************ 00:06:11.473 START TEST json_config 00:06:11.473 ************************************ 00:06:11.473 13:34:04 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.473 13:34:04 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8089bee2-271d-eb11-906e-0017a4403562 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8089bee2-271d-eb11-906e-0017a4403562 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.473 13:34:04 json_config -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:11.473 13:34:04 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.473 13:34:04 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.473 13:34:04 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.473 13:34:04 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.473 13:34:04 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.473 13:34:04 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.473 13:34:04 json_config -- paths/export.sh@5 -- # export PATH 00:06:11.473 13:34:04 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@47 -- # : 0 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.473 13:34:04 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.474 13:34:04 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 
SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:11.474 WARNING: No tests are enabled so not running JSON configuration tests 00:06:11.474 13:34:04 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:11.474 00:06:11.474 real 0m0.092s 00:06:11.474 user 0m0.052s 00:06:11.474 sys 0m0.041s 00:06:11.474 13:34:04 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.474 13:34:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:11.474 ************************************ 00:06:11.474 END TEST json_config 00:06:11.474 ************************************ 00:06:11.474 13:34:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:11.474 13:34:04 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:11.474 13:34:04 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.474 13:34:04 -- common/autotest_common.sh@10 -- # set +x 00:06:11.474 ************************************ 00:06:11.474 START TEST json_config_extra_key 00:06:11.474 ************************************ 00:06:11.474 13:34:04 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8089bee2-271d-eb11-906e-0017a4403562 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8089bee2-271d-eb11-906e-0017a4403562 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 
00:06:11.734 13:34:04 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.734 13:34:04 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.734 13:34:04 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.734 13:34:04 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.734 13:34:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.734 13:34:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.734 13:34:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:11.734 13:34:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:11.734 13:34:04 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:11.734 13:34:04 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:11.734 INFO: launching applications... 00:06:11.734 13:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3421057 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:11.734 Waiting for target to run... 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3421057 /var/tmp/spdk_tgt.sock 00:06:11.734 13:34:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:11.734 13:34:04 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 3421057 ']' 00:06:11.734 13:34:04 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.734 13:34:04 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:11.734 13:34:04 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.734 13:34:04 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:11.734 13:34:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.734 [2024-06-11 13:34:04.443680] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:11.734 [2024-06-11 13:34:04.443753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421057 ] 00:06:11.734 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.304 [2024-06-11 13:34:04.934369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.304 [2024-06-11 13:34:05.036827] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.563 13:34:05 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:12.563 13:34:05 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:12.563 00:06:12.563 13:34:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:12.563 INFO: shutting down applications... 00:06:12.563 13:34:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3421057 ]] 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3421057 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3421057 00:06:12.563 13:34:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3421057 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:13.132 13:34:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:13.132 SPDK target shutdown done 00:06:13.132 13:34:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:13.132 Success 00:06:13.132 00:06:13.132 real 0m1.529s 00:06:13.132 user 0m1.232s 00:06:13.132 sys 0m0.583s 00:06:13.132 13:34:05 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.132 13:34:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.132 ************************************ 00:06:13.132 END TEST json_config_extra_key 00:06:13.132 ************************************ 00:06:13.132 13:34:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.132 13:34:05 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:13.132 13:34:05 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.132 13:34:05 -- common/autotest_common.sh@10 -- # set +x 00:06:13.132 ************************************ 
00:06:13.132 START TEST alias_rpc 00:06:13.132 ************************************ 00:06:13.132 13:34:05 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:13.132 * Looking for test storage... 00:06:13.132 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:13.132 13:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.132 13:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3421318 00:06:13.132 13:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3421318 00:06:13.132 13:34:06 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 3421318 ']' 00:06:13.132 13:34:06 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.132 13:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.132 13:34:06 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.132 13:34:06 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.132 13:34:06 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.132 13:34:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.391 [2024-06-11 13:34:06.044653] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:13.391 [2024-06-11 13:34:06.044726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421318 ] 00:06:13.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.391 [2024-06-11 13:34:06.125109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.391 [2024-06-11 13:34:06.225335] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.651 13:34:06 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:13.651 13:34:06 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:13.651 13:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:13.910 13:34:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3421318 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 3421318 ']' 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 3421318 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3421318 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3421318' 00:06:13.910 killing process with pid 3421318 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@968 -- # kill 3421318 00:06:13.910 13:34:06 alias_rpc -- common/autotest_common.sh@973 -- # wait 
3421318 00:06:14.479 00:06:14.479 real 0m1.238s 00:06:14.479 user 0m1.348s 00:06:14.479 sys 0m0.431s 00:06:14.479 13:34:07 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:14.479 13:34:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.479 ************************************ 00:06:14.479 END TEST alias_rpc 00:06:14.479 ************************************ 00:06:14.479 13:34:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:14.479 13:34:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.479 13:34:07 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:14.479 13:34:07 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:14.479 13:34:07 -- common/autotest_common.sh@10 -- # set +x 00:06:14.479 ************************************ 00:06:14.479 START TEST spdkcli_tcp 00:06:14.479 ************************************ 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:14.479 * Looking for test storage... 00:06:14.479 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3421587 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3421587 00:06:14.479 13:34:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 3421587 ']' 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:14.479 13:34:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.479 [2024-06-11 13:34:07.348517] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:14.479 [2024-06-11 13:34:07.348590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421587 ] 00:06:14.479 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.738 [2024-06-11 13:34:07.418894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.738 [2024-06-11 13:34:07.519658] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.738 [2024-06-11 13:34:07.519665] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.675 13:34:08 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:15.675 13:34:08 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:06:15.675 13:34:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3421802 00:06:15.675 13:34:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.675 13:34:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.675 [ 00:06:15.675 "spdk_get_version", 00:06:15.675 "rpc_get_methods", 00:06:15.675 "trace_get_info", 00:06:15.675 "trace_get_tpoint_group_mask", 00:06:15.675 "trace_disable_tpoint_group", 00:06:15.675 "trace_enable_tpoint_group", 00:06:15.675 "trace_clear_tpoint_mask", 00:06:15.675 "trace_set_tpoint_mask", 00:06:15.675 "vfu_tgt_set_base_path", 00:06:15.675 "framework_get_pci_devices", 00:06:15.675 "framework_get_config", 00:06:15.675 "framework_get_subsystems", 00:06:15.675 "keyring_get_keys", 00:06:15.675 "iobuf_get_stats", 00:06:15.675 "iobuf_set_options", 00:06:15.675 "sock_get_default_impl", 00:06:15.675 "sock_set_default_impl", 00:06:15.675 "sock_impl_set_options", 00:06:15.675 "sock_impl_get_options", 00:06:15.675 "vmd_rescan", 00:06:15.675 "vmd_remove_device", 00:06:15.675 "vmd_enable", 00:06:15.675 "accel_get_stats", 00:06:15.675 "accel_set_options", 00:06:15.675 "accel_set_driver", 00:06:15.675 "accel_crypto_key_destroy", 00:06:15.675 "accel_crypto_keys_get", 00:06:15.675 "accel_crypto_key_create", 00:06:15.675 "accel_assign_opc", 00:06:15.675 "accel_get_module_info", 00:06:15.675 "accel_get_opc_assignments", 00:06:15.675 "notify_get_notifications", 00:06:15.675 "notify_get_types", 00:06:15.675 "bdev_get_histogram", 00:06:15.675 "bdev_enable_histogram", 00:06:15.675 "bdev_set_qos_limit", 00:06:15.675 "bdev_set_qd_sampling_period", 00:06:15.675 "bdev_get_bdevs", 00:06:15.675 "bdev_reset_iostat", 00:06:15.675 "bdev_get_iostat", 00:06:15.675 "bdev_examine", 00:06:15.675 "bdev_wait_for_examine", 00:06:15.675 "bdev_set_options", 00:06:15.675 "scsi_get_devices", 00:06:15.675 "thread_set_cpumask", 00:06:15.675 "framework_get_scheduler", 00:06:15.675 "framework_set_scheduler", 00:06:15.675 "framework_get_reactors", 00:06:15.675 "thread_get_io_channels", 00:06:15.675 "thread_get_pollers", 00:06:15.675 "thread_get_stats", 00:06:15.675 "framework_monitor_context_switch", 00:06:15.675 "spdk_kill_instance", 00:06:15.675 "log_enable_timestamps", 00:06:15.675 "log_get_flags", 00:06:15.675 "log_clear_flag", 00:06:15.675 "log_set_flag", 00:06:15.675 "log_get_level", 00:06:15.675 "log_set_level", 00:06:15.675 "log_get_print_level", 00:06:15.675 "log_set_print_level", 00:06:15.675 "framework_enable_cpumask_locks", 00:06:15.675 "framework_disable_cpumask_locks", 00:06:15.675 "framework_wait_init", 00:06:15.675 
"framework_start_init", 00:06:15.675 "virtio_blk_create_transport", 00:06:15.675 "virtio_blk_get_transports", 00:06:15.675 "vhost_controller_set_coalescing", 00:06:15.675 "vhost_get_controllers", 00:06:15.675 "vhost_delete_controller", 00:06:15.675 "vhost_create_blk_controller", 00:06:15.675 "vhost_scsi_controller_remove_target", 00:06:15.675 "vhost_scsi_controller_add_target", 00:06:15.675 "vhost_start_scsi_controller", 00:06:15.675 "vhost_create_scsi_controller", 00:06:15.675 "ublk_recover_disk", 00:06:15.675 "ublk_get_disks", 00:06:15.675 "ublk_stop_disk", 00:06:15.675 "ublk_start_disk", 00:06:15.675 "ublk_destroy_target", 00:06:15.675 "ublk_create_target", 00:06:15.675 "nbd_get_disks", 00:06:15.675 "nbd_stop_disk", 00:06:15.675 "nbd_start_disk", 00:06:15.675 "env_dpdk_get_mem_stats", 00:06:15.676 "nvmf_stop_mdns_prr", 00:06:15.676 "nvmf_publish_mdns_prr", 00:06:15.676 "nvmf_subsystem_get_listeners", 00:06:15.676 "nvmf_subsystem_get_qpairs", 00:06:15.676 "nvmf_subsystem_get_controllers", 00:06:15.676 "nvmf_get_stats", 00:06:15.676 "nvmf_get_transports", 00:06:15.676 "nvmf_create_transport", 00:06:15.676 "nvmf_get_targets", 00:06:15.676 "nvmf_delete_target", 00:06:15.676 "nvmf_create_target", 00:06:15.676 "nvmf_subsystem_allow_any_host", 00:06:15.676 "nvmf_subsystem_remove_host", 00:06:15.676 "nvmf_subsystem_add_host", 00:06:15.676 "nvmf_ns_remove_host", 00:06:15.676 "nvmf_ns_add_host", 00:06:15.676 "nvmf_subsystem_remove_ns", 00:06:15.676 "nvmf_subsystem_add_ns", 00:06:15.676 "nvmf_subsystem_listener_set_ana_state", 00:06:15.676 "nvmf_discovery_get_referrals", 00:06:15.676 "nvmf_discovery_remove_referral", 00:06:15.676 "nvmf_discovery_add_referral", 00:06:15.676 "nvmf_subsystem_remove_listener", 00:06:15.676 "nvmf_subsystem_add_listener", 00:06:15.676 "nvmf_delete_subsystem", 00:06:15.676 "nvmf_create_subsystem", 00:06:15.676 "nvmf_get_subsystems", 00:06:15.676 "nvmf_set_crdt", 00:06:15.676 "nvmf_set_config", 00:06:15.676 "nvmf_set_max_subsystems", 00:06:15.676 "iscsi_get_histogram", 00:06:15.676 "iscsi_enable_histogram", 00:06:15.676 "iscsi_set_options", 00:06:15.676 "iscsi_get_auth_groups", 00:06:15.676 "iscsi_auth_group_remove_secret", 00:06:15.676 "iscsi_auth_group_add_secret", 00:06:15.676 "iscsi_delete_auth_group", 00:06:15.676 "iscsi_create_auth_group", 00:06:15.676 "iscsi_set_discovery_auth", 00:06:15.676 "iscsi_get_options", 00:06:15.676 "iscsi_target_node_request_logout", 00:06:15.676 "iscsi_target_node_set_redirect", 00:06:15.676 "iscsi_target_node_set_auth", 00:06:15.676 "iscsi_target_node_add_lun", 00:06:15.676 "iscsi_get_stats", 00:06:15.676 "iscsi_get_connections", 00:06:15.676 "iscsi_portal_group_set_auth", 00:06:15.676 "iscsi_start_portal_group", 00:06:15.676 "iscsi_delete_portal_group", 00:06:15.676 "iscsi_create_portal_group", 00:06:15.676 "iscsi_get_portal_groups", 00:06:15.676 "iscsi_delete_target_node", 00:06:15.676 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.676 "iscsi_target_node_add_pg_ig_maps", 00:06:15.676 "iscsi_create_target_node", 00:06:15.676 "iscsi_get_target_nodes", 00:06:15.676 "iscsi_delete_initiator_group", 00:06:15.676 "iscsi_initiator_group_remove_initiators", 00:06:15.676 "iscsi_initiator_group_add_initiators", 00:06:15.676 "iscsi_create_initiator_group", 00:06:15.676 "iscsi_get_initiator_groups", 00:06:15.676 "keyring_linux_set_options", 00:06:15.676 "keyring_file_remove_key", 00:06:15.676 "keyring_file_add_key", 00:06:15.676 "vfu_virtio_create_scsi_endpoint", 00:06:15.676 "vfu_virtio_scsi_remove_target", 00:06:15.676 
"vfu_virtio_scsi_add_target", 00:06:15.676 "vfu_virtio_create_blk_endpoint", 00:06:15.676 "vfu_virtio_delete_endpoint", 00:06:15.676 "iaa_scan_accel_module", 00:06:15.676 "dsa_scan_accel_module", 00:06:15.676 "ioat_scan_accel_module", 00:06:15.676 "accel_error_inject_error", 00:06:15.676 "bdev_iscsi_delete", 00:06:15.676 "bdev_iscsi_create", 00:06:15.676 "bdev_iscsi_set_options", 00:06:15.676 "bdev_virtio_attach_controller", 00:06:15.676 "bdev_virtio_scsi_get_devices", 00:06:15.676 "bdev_virtio_detach_controller", 00:06:15.676 "bdev_virtio_blk_set_hotplug", 00:06:15.676 "bdev_ftl_set_property", 00:06:15.676 "bdev_ftl_get_properties", 00:06:15.676 "bdev_ftl_get_stats", 00:06:15.676 "bdev_ftl_unmap", 00:06:15.676 "bdev_ftl_unload", 00:06:15.676 "bdev_ftl_delete", 00:06:15.676 "bdev_ftl_load", 00:06:15.676 "bdev_ftl_create", 00:06:15.676 "bdev_aio_delete", 00:06:15.676 "bdev_aio_rescan", 00:06:15.676 "bdev_aio_create", 00:06:15.676 "blobfs_create", 00:06:15.676 "blobfs_detect", 00:06:15.676 "blobfs_set_cache_size", 00:06:15.676 "bdev_zone_block_delete", 00:06:15.676 "bdev_zone_block_create", 00:06:15.676 "bdev_delay_delete", 00:06:15.676 "bdev_delay_create", 00:06:15.676 "bdev_delay_update_latency", 00:06:15.676 "bdev_split_delete", 00:06:15.676 "bdev_split_create", 00:06:15.676 "bdev_error_inject_error", 00:06:15.676 "bdev_error_delete", 00:06:15.676 "bdev_error_create", 00:06:15.676 "bdev_raid_set_options", 00:06:15.676 "bdev_raid_remove_base_bdev", 00:06:15.676 "bdev_raid_add_base_bdev", 00:06:15.676 "bdev_raid_delete", 00:06:15.676 "bdev_raid_create", 00:06:15.676 "bdev_raid_get_bdevs", 00:06:15.676 "bdev_lvol_set_parent_bdev", 00:06:15.676 "bdev_lvol_set_parent", 00:06:15.676 "bdev_lvol_check_shallow_copy", 00:06:15.676 "bdev_lvol_start_shallow_copy", 00:06:15.676 "bdev_lvol_grow_lvstore", 00:06:15.676 "bdev_lvol_get_lvols", 00:06:15.676 "bdev_lvol_get_lvstores", 00:06:15.676 "bdev_lvol_delete", 00:06:15.676 "bdev_lvol_set_read_only", 00:06:15.676 "bdev_lvol_resize", 00:06:15.676 "bdev_lvol_decouple_parent", 00:06:15.676 "bdev_lvol_inflate", 00:06:15.676 "bdev_lvol_rename", 00:06:15.676 "bdev_lvol_clone_bdev", 00:06:15.676 "bdev_lvol_clone", 00:06:15.676 "bdev_lvol_snapshot", 00:06:15.676 "bdev_lvol_create", 00:06:15.676 "bdev_lvol_delete_lvstore", 00:06:15.676 "bdev_lvol_rename_lvstore", 00:06:15.676 "bdev_lvol_create_lvstore", 00:06:15.676 "bdev_passthru_delete", 00:06:15.676 "bdev_passthru_create", 00:06:15.676 "bdev_nvme_cuse_unregister", 00:06:15.676 "bdev_nvme_cuse_register", 00:06:15.676 "bdev_opal_new_user", 00:06:15.676 "bdev_opal_set_lock_state", 00:06:15.676 "bdev_opal_delete", 00:06:15.676 "bdev_opal_get_info", 00:06:15.676 "bdev_opal_create", 00:06:15.676 "bdev_nvme_opal_revert", 00:06:15.676 "bdev_nvme_opal_init", 00:06:15.676 "bdev_nvme_send_cmd", 00:06:15.676 "bdev_nvme_get_path_iostat", 00:06:15.676 "bdev_nvme_get_mdns_discovery_info", 00:06:15.676 "bdev_nvme_stop_mdns_discovery", 00:06:15.676 "bdev_nvme_start_mdns_discovery", 00:06:15.676 "bdev_nvme_set_multipath_policy", 00:06:15.676 "bdev_nvme_set_preferred_path", 00:06:15.676 "bdev_nvme_get_io_paths", 00:06:15.676 "bdev_nvme_remove_error_injection", 00:06:15.676 "bdev_nvme_add_error_injection", 00:06:15.676 "bdev_nvme_get_discovery_info", 00:06:15.676 "bdev_nvme_stop_discovery", 00:06:15.676 "bdev_nvme_start_discovery", 00:06:15.676 "bdev_nvme_get_controller_health_info", 00:06:15.676 "bdev_nvme_disable_controller", 00:06:15.676 "bdev_nvme_enable_controller", 00:06:15.676 "bdev_nvme_reset_controller", 00:06:15.676 
"bdev_nvme_get_transport_statistics", 00:06:15.676 "bdev_nvme_apply_firmware", 00:06:15.676 "bdev_nvme_detach_controller", 00:06:15.676 "bdev_nvme_get_controllers", 00:06:15.676 "bdev_nvme_attach_controller", 00:06:15.676 "bdev_nvme_set_hotplug", 00:06:15.676 "bdev_nvme_set_options", 00:06:15.676 "bdev_null_resize", 00:06:15.676 "bdev_null_delete", 00:06:15.676 "bdev_null_create", 00:06:15.676 "bdev_malloc_delete", 00:06:15.676 "bdev_malloc_create" 00:06:15.676 ] 00:06:15.676 13:34:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.676 13:34:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.676 13:34:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3421587 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 3421587 ']' 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 3421587 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.676 13:34:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3421587 00:06:15.935 13:34:08 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.935 13:34:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.935 13:34:08 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3421587' 00:06:15.935 killing process with pid 3421587 00:06:15.935 13:34:08 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 3421587 00:06:15.935 13:34:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 3421587 00:06:16.196 00:06:16.196 real 0m1.719s 00:06:16.196 user 0m3.328s 00:06:16.196 sys 0m0.467s 00:06:16.196 13:34:08 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:16.196 13:34:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.196 ************************************ 00:06:16.196 END TEST spdkcli_tcp 00:06:16.196 ************************************ 00:06:16.196 13:34:08 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.196 13:34:08 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:16.196 13:34:08 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:16.196 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:06:16.196 ************************************ 00:06:16.196 START TEST dpdk_mem_utility 00:06:16.196 ************************************ 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:16.196 * Looking for test storage... 
00:06:16.196 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:16.196 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.196 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3422044 00:06:16.196 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3422044 00:06:16.196 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 3422044 ']' 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:16.196 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.455 [2024-06-11 13:34:09.123767] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:16.455 [2024-06-11 13:34:09.123837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422044 ] 00:06:16.455 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.455 [2024-06-11 13:34:09.200921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.455 [2024-06-11 13:34:09.304813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.714 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:16.714 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:06:16.714 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.714 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.714 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:16.714 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.714 { 00:06:16.714 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.714 } 00:06:16.714 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:16.714 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:16.714 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:16.714 1 heaps totaling size 814.000000 MiB 00:06:16.714 size: 814.000000 MiB heap id: 0 00:06:16.714 end heaps---------- 00:06:16.714 8 mempools totaling size 598.116089 MiB 00:06:16.714 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.714 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.714 size: 84.521057 MiB name: bdev_io_3422044 00:06:16.714 size: 51.011292 MiB name: evtpool_3422044 00:06:16.714 size: 50.003479 MiB 
name: msgpool_3422044 00:06:16.714 size: 21.763794 MiB name: PDU_Pool 00:06:16.714 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.714 size: 0.026123 MiB name: Session_Pool 00:06:16.714 end mempools------- 00:06:16.714 6 memzones totaling size 4.142822 MiB 00:06:16.714 size: 1.000366 MiB name: RG_ring_0_3422044 00:06:16.714 size: 1.000366 MiB name: RG_ring_1_3422044 00:06:16.714 size: 1.000366 MiB name: RG_ring_4_3422044 00:06:16.714 size: 1.000366 MiB name: RG_ring_5_3422044 00:06:16.714 size: 0.125366 MiB name: RG_ring_2_3422044 00:06:16.714 size: 0.015991 MiB name: RG_ring_3_3422044 00:06:16.714 end memzones------- 00:06:16.714 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.974 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:16.974 list of free elements. size: 12.519348 MiB 00:06:16.974 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:16.974 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:16.974 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:16.974 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:16.974 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:16.974 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:16.974 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:16.974 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:16.974 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:16.974 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:16.974 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:16.974 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:16.974 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:16.974 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:16.974 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:16.974 list of standard malloc elements. 
size: 199.218079 MiB 00:06:16.974 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:16.974 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:16.974 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.974 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:16.974 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.974 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.974 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:16.974 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.974 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:16.974 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:16.974 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:16.974 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:16.974 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:16.974 list of memzone associated elements. 
size: 602.262573 MiB 00:06:16.974 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:16.974 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.974 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:16.974 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.974 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:16.974 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3422044_0 00:06:16.974 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.974 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3422044_0 00:06:16.974 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.974 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3422044_0 00:06:16.974 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:16.974 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.974 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:16.974 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.974 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.974 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3422044 00:06:16.974 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.974 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3422044 00:06:16.974 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.974 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3422044 00:06:16.974 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:16.974 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.974 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:16.974 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.974 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:16.974 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.974 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:16.974 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.974 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3422044 00:06:16.974 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3422044 00:06:16.974 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3422044 00:06:16.974 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:16.974 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3422044 00:06:16.974 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3422044 00:06:16.974 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.974 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:16.974 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.974 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:16.974 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.974 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:16.974 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3422044 00:06:16.974 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:16.974 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.974 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:16.974 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.974 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:16.974 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3422044 00:06:16.974 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:16.974 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.974 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:16.974 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3422044 00:06:16.974 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:16.974 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3422044 00:06:16.974 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:16.974 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.974 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.974 13:34:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3422044 00:06:16.974 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 3422044 ']' 00:06:16.974 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 3422044 00:06:16.974 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:06:16.974 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:16.974 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3422044 00:06:16.974 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:16.975 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:16.975 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3422044' 00:06:16.975 killing process with pid 3422044 00:06:16.975 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 3422044 00:06:16.975 13:34:09 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 3422044 00:06:17.233 00:06:17.233 real 0m1.073s 00:06:17.233 user 0m1.078s 00:06:17.233 sys 0m0.419s 00:06:17.233 13:34:10 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:17.233 13:34:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.233 ************************************ 00:06:17.233 END TEST dpdk_mem_utility 00:06:17.233 ************************************ 00:06:17.233 13:34:10 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:17.233 13:34:10 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:17.233 13:34:10 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.233 13:34:10 -- common/autotest_common.sh@10 -- # set +x 00:06:17.492 ************************************ 00:06:17.492 START TEST event 00:06:17.492 ************************************ 00:06:17.492 13:34:10 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:17.492 * Looking for test storage... 
00:06:17.492 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:17.492 13:34:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:17.492 13:34:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.492 13:34:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.492 13:34:10 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:17.492 13:34:10 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:17.492 13:34:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.492 ************************************ 00:06:17.492 START TEST event_perf 00:06:17.492 ************************************ 00:06:17.492 13:34:10 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.492 Running I/O for 1 seconds...[2024-06-11 13:34:10.293397] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:17.492 [2024-06-11 13:34:10.293488] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422199 ] 00:06:17.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.492 [2024-06-11 13:34:10.376673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.751 [2024-06-11 13:34:10.481273] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.751 [2024-06-11 13:34:10.481367] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.751 [2024-06-11 13:34:10.481477] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.751 [2024-06-11 13:34:10.481479] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.688 Running I/O for 1 seconds... 00:06:18.688 lcore 0: 165583 00:06:18.688 lcore 1: 165580 00:06:18.688 lcore 2: 165580 00:06:18.688 lcore 3: 165582 00:06:18.688 done. 00:06:18.688 00:06:18.688 real 0m1.291s 00:06:18.688 user 0m4.175s 00:06:18.688 sys 0m0.111s 00:06:18.688 13:34:11 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:18.688 13:34:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.688 ************************************ 00:06:18.688 END TEST event_perf 00:06:18.688 ************************************ 00:06:18.947 13:34:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.947 13:34:11 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:18.947 13:34:11 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:18.947 13:34:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.947 ************************************ 00:06:18.947 START TEST event_reactor 00:06:18.947 ************************************ 00:06:18.947 13:34:11 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:18.947 [2024-06-11 13:34:11.655462] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
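
event_perf, which ran just above, starts one reactor per bit in the -m core mask and dispatches events to them for -t seconds; each 'lcore N:' line reports how many events that reactor handled, so the near-identical counts (~165k per core in one second here) suggest the dispatch stayed balanced across the four cores. A hand-run equivalent, assuming the same workspace layout as this job:

  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # two reactors (mask 0x3) for five seconds instead of four reactors for one
  ./test/event/event_perf/event_perf -m 0x3 -t 5
  # expected output: one "lcore N: <count>" line per enabled core, then "done."
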
00:06:18.947 [2024-06-11 13:34:11.655527] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422439 ] 00:06:18.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.947 [2024-06-11 13:34:11.733294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.947 [2024-06-11 13:34:11.831152] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.324 test_start 00:06:20.324 oneshot 00:06:20.324 tick 100 00:06:20.324 tick 100 00:06:20.324 tick 250 00:06:20.324 tick 100 00:06:20.324 tick 100 00:06:20.324 tick 100 00:06:20.324 tick 250 00:06:20.324 tick 500 00:06:20.324 tick 100 00:06:20.324 tick 100 00:06:20.324 tick 250 00:06:20.324 tick 100 00:06:20.324 tick 100 00:06:20.324 test_end 00:06:20.324 00:06:20.324 real 0m1.277s 00:06:20.324 user 0m1.179s 00:06:20.324 sys 0m0.092s 00:06:20.324 13:34:12 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:20.324 13:34:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:20.324 ************************************ 00:06:20.324 END TEST event_reactor 00:06:20.324 ************************************ 00:06:20.324 13:34:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.324 13:34:12 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:20.324 13:34:12 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:20.324 13:34:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.324 ************************************ 00:06:20.324 START TEST event_reactor_perf 00:06:20.324 ************************************ 00:06:20.324 13:34:12 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.324 [2024-06-11 13:34:13.000397] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:20.325 [2024-06-11 13:34:13.000464] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422657 ] 00:06:20.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.325 [2024-06-11 13:34:13.076729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.325 [2024-06-11 13:34:13.174397] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.701 test_start 00:06:21.701 test_end 00:06:21.701 Performance: 544178 events per second 00:06:21.701 00:06:21.701 real 0m1.275s 00:06:21.701 user 0m1.170s 00:06:21.701 sys 0m0.099s 00:06:21.701 13:34:14 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:21.701 13:34:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.701 ************************************ 00:06:21.701 END TEST event_reactor_perf 00:06:21.701 ************************************ 00:06:21.701 13:34:14 event -- event/event.sh@49 -- # uname -s 00:06:21.701 13:34:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.701 13:34:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.701 13:34:14 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:21.701 13:34:14 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.701 13:34:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.701 ************************************ 00:06:21.701 START TEST event_scheduler 00:06:21.701 ************************************ 00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:21.701 * Looking for test storage... 00:06:21.701 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:21.701 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.701 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3422953 00:06:21.701 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.701 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.701 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3422953 00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 3422953 ']' 00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
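
The scheduler test target above is started with --wait-for-rpc, so the app parks itself after EAL setup and waits for RPCs before initializing subsystems; waitforlisten simply polls the UNIX-domain socket until rpc.py gets an answer. A rough sketch of the same start-up sequence done by hand (paths as used in this job, retries and timeouts omitted):

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  $SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  # poll until the RPC socket answers -- roughly what waitforlisten does
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
  $SPDK/scripts/rpc.py framework_set_scheduler dynamic   # must happen before init
  $SPDK/scripts/rpc.py framework_start_init
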
00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:21.701 13:34:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.701 [2024-06-11 13:34:14.427498] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:21.701 [2024-06-11 13:34:14.427570] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422953 ] 00:06:21.701 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.701 [2024-06-11 13:34:14.485724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.701 [2024-06-11 13:34:14.576733] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.701 [2024-06-11 13:34:14.576751] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.701 [2024-06-11 13:34:14.576784] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.701 [2024-06-11 13:34:14.576785] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:06:21.961 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 POWER: Env isn't set yet! 00:06:21.961 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:21.961 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:21.961 POWER: Cannot set governor of lcore 0 to userspace 00:06:21.961 POWER: Attempting to initialise PSTAT power management... 
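
The POWER lines come from DPDK's power library: the ACPI cpufreq attempt fails because the governor cannot be switched to 'userspace' on this host, the PSTAT (intel_pstate) path then takes over, and, as the lines that follow show, each lcore's governor is set to 'performance' for the test and restored to 'powersave' once the scheduler test ends. The sysfs operations this corresponds to, shown only for orientation (DPDK performs them internally, per lcore):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver     # e.g. intel_pstate
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # powersave before the test
  echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
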
00:06:21.961 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:21.961 POWER: Initialized successfully for lcore 0 power management 00:06:21.961 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:21.961 POWER: Initialized successfully for lcore 1 power management 00:06:21.961 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:21.961 POWER: Initialized successfully for lcore 2 power management 00:06:21.961 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:21.961 POWER: Initialized successfully for lcore 3 power management 00:06:21.961 [2024-06-11 13:34:14.701346] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.961 [2024-06-11 13:34:14.701359] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.961 [2024-06-11 13:34:14.701367] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.961 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 [2024-06-11 13:34:14.774870] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.961 13:34:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 ************************************ 00:06:21.961 START TEST scheduler_create_thread 00:06:21.961 ************************************ 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 2 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 3 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 4 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.961 5 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:21.961 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 6 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 7 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 8 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 9 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.220 10 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.220 13:34:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.479 13:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.479 13:34:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.479 13:34:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.479 13:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.479 13:34:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.414 13:34:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:23.414 13:34:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.414 13:34:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:23.414 13:34:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.350 13:34:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:24.350 13:34:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.350 13:34:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.350 13:34:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:24.350 13:34:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.287 13:34:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:25.287 00:06:25.287 real 0m3.231s 00:06:25.287 user 0m0.024s 00:06:25.287 sys 0m0.004s 00:06:25.287 13:34:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.287 13:34:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.287 ************************************ 00:06:25.287 END TEST scheduler_create_thread 00:06:25.287 ************************************ 00:06:25.287 13:34:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.288 13:34:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3422953 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 3422953 ']' 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 3422953 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
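
scheduler_create_thread, which just finished above, exercises the test app through its scheduler_plugin RPCs: threads are created with a name, an optional pinned cpumask (-m) and a target busy percentage (-a), one thread has its activity changed, and one is deleted, after which the dynamic scheduler is expected to rebalance them. A condensed sketch of that RPC sequence, using the same rpc_cmd wrapper seen in the trace:

  # a thread pinned to core 0 that reports itself 100% busy
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # an unpinned thread that is busy about a third of the time; rpc_cmd prints its id
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30)
  # change how busy that thread claims to be, then remove it
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
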
00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3422953 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3422953' 00:06:25.288 killing process with pid 3422953 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 3422953 00:06:25.288 13:34:18 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 3422953 00:06:25.546 [2024-06-11 13:34:18.424289] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:25.805 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:25.805 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:25.805 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:25.805 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:25.805 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:25.805 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:25.805 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:25.805 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:25.805 00:06:25.805 real 0m4.368s 00:06:25.805 user 0m7.761s 00:06:25.805 sys 0m0.355s 00:06:25.805 13:34:18 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:25.805 13:34:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:25.805 ************************************ 00:06:25.805 END TEST event_scheduler 00:06:25.805 ************************************ 00:06:26.064 13:34:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.064 13:34:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.064 13:34:18 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:26.064 13:34:18 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:26.064 13:34:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.064 ************************************ 00:06:26.064 START TEST app_repeat 00:06:26.064 ************************************ 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3423771 00:06:26.064 13:34:18 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3423771' 00:06:26.064 Process app_repeat pid: 3423771 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.064 spdk_app_start Round 0 00:06:26.064 13:34:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3423771 /var/tmp/spdk-nbd.sock 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3423771 ']' 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:26.064 13:34:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.064 [2024-06-11 13:34:18.793229] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:26.064 [2024-06-11 13:34:18.793302] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423771 ] 00:06:26.064 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.064 [2024-06-11 13:34:18.875586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.326 [2024-06-11 13:34:18.978423] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.326 [2024-06-11 13:34:18.978429] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.326 13:34:19 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:26.326 13:34:19 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:26.326 13:34:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.326 Malloc0 00:06:26.679 13:34:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.679 Malloc1 00:06:26.679 13:34:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.679 13:34:19 
event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.679 13:34:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.679 /dev/nbd0 00:06:26.938 13:34:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.938 13:34:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.938 1+0 records in 00:06:26.938 1+0 records out 00:06:26.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225076 s, 18.2 MB/s 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:26.938 13:34:19 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:26.938 13:34:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.938 13:34:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.938 13:34:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:27.196 /dev/nbd1 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:27.197 13:34:19 
event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.197 1+0 records in 00:06:27.197 1+0 records out 00:06:27.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242956 s, 16.9 MB/s 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:27.197 13:34:19 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.197 13:34:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:27.456 { 00:06:27.456 "nbd_device": "/dev/nbd0", 00:06:27.456 "bdev_name": "Malloc0" 00:06:27.456 }, 00:06:27.456 { 00:06:27.456 "nbd_device": "/dev/nbd1", 00:06:27.456 "bdev_name": "Malloc1" 00:06:27.456 } 00:06:27.456 ]' 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:27.456 { 00:06:27.456 "nbd_device": "/dev/nbd0", 00:06:27.456 "bdev_name": "Malloc0" 00:06:27.456 }, 00:06:27.456 { 00:06:27.456 "nbd_device": "/dev/nbd1", 00:06:27.456 "bdev_name": "Malloc1" 00:06:27.456 } 00:06:27.456 ]' 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:27.456 /dev/nbd1' 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:27.456 /dev/nbd1' 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.456 13:34:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:27.457 
13:34:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:27.457 256+0 records in 00:06:27.457 256+0 records out 00:06:27.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104283 s, 101 MB/s 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:27.457 256+0 records in 00:06:27.457 256+0 records out 00:06:27.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220228 s, 47.6 MB/s 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:27.457 256+0 records in 00:06:27.457 256+0 records out 00:06:27.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233637 s, 44.9 MB/s 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:27.457 13:34:20 
event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.457 13:34:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:27.716 13:34:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.974 13:34:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.975 13:34:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.233 13:34:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.233 13:34:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.233 13:34:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.233 13:34:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.233 13:34:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
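
One app_repeat round, which ends just above, exports each malloc bdev over NBD, writes the same 1 MiB of random data to both block devices with dd, reads it back with cmp, and then tears everything down and kills the target. Stripped of the helper plumbing, the core of a round looks roughly like this (one device shown; socket and sizes as in this job, temp path is a stand-in):

  SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC bdev_malloc_create 64 4096                             # 64 MiB bdev, 4 KiB blocks -> Malloc0
  $RPC nbd_start_disk Malloc0 /dev/nbd0                       # expose the bdev as /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256    # 1 MiB of test data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                     # verify the bdev returns the same bytes
  rm /tmp/nbdrandtest
  $RPC nbd_stop_disk /dev/nbd0
  $RPC spdk_kill_instance SIGTERM
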
00:06:28.492 13:34:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.749 [2024-06-11 13:34:21.536929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.749 [2024-06-11 13:34:21.630281] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.749 [2024-06-11 13:34:21.630286] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.007 [2024-06-11 13:34:21.678687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.007 [2024-06-11 13:34:21.678734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.537 13:34:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.537 13:34:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.537 spdk_app_start Round 1 00:06:31.537 13:34:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3423771 /var/tmp/spdk-nbd.sock 00:06:31.537 13:34:24 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3423771 ']' 00:06:31.537 13:34:24 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.537 13:34:24 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:31.537 13:34:24 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.537 13:34:24 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:31.537 13:34:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.795 13:34:24 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:31.795 13:34:24 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:31.795 13:34:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.795 Malloc0 00:06:31.795 13:34:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.054 Malloc1 00:06:32.054 13:34:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.054 
13:34:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.054 13:34:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.312 /dev/nbd0 00:06:32.313 13:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.313 13:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.313 1+0 records in 00:06:32.313 1+0 records out 00:06:32.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247971 s, 16.5 MB/s 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:32.313 13:34:25 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:32.313 13:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.313 13:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.313 13:34:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.571 /dev/nbd1 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:32.571 
13:34:25 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.571 1+0 records in 00:06:32.571 1+0 records out 00:06:32.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228294 s, 17.9 MB/s 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:32.571 13:34:25 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.571 13:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.830 { 00:06:32.830 "nbd_device": "/dev/nbd0", 00:06:32.830 "bdev_name": "Malloc0" 00:06:32.830 }, 00:06:32.830 { 00:06:32.830 "nbd_device": "/dev/nbd1", 00:06:32.830 "bdev_name": "Malloc1" 00:06:32.830 } 00:06:32.830 ]' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.830 { 00:06:32.830 "nbd_device": "/dev/nbd0", 00:06:32.830 "bdev_name": "Malloc0" 00:06:32.830 }, 00:06:32.830 { 00:06:32.830 "nbd_device": "/dev/nbd1", 00:06:32.830 "bdev_name": "Malloc1" 00:06:32.830 } 00:06:32.830 ]' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.830 /dev/nbd1' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.830 /dev/nbd1' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.830 13:34:25 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.830 256+0 records in 00:06:32.830 256+0 records out 00:06:32.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00983324 s, 107 MB/s 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.830 256+0 records in 00:06:32.830 256+0 records out 00:06:32.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217151 s, 48.3 MB/s 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.830 256+0 records in 00:06:32.830 256+0 records out 00:06:32.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233719 s, 44.9 MB/s 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.830 13:34:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.831 13:34:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.831 13:34:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.089 13:34:25 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.089 13:34:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.347 13:34:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.605 13:34:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.605 13:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.605 13:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.864 13:34:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.864 13:34:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.123 13:34:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.123 [2024-06-11 13:34:27.029445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.381 [2024-06-11 13:34:27.123508] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.381 [2024-06-11 13:34:27.123513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.381 [2024-06-11 13:34:27.173825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:34.381 [2024-06-11 13:34:27.173874] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.913 13:34:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.913 13:34:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.913 spdk_app_start Round 2 00:06:36.913 13:34:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3423771 /var/tmp/spdk-nbd.sock 00:06:36.913 13:34:29 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3423771 ']' 00:06:36.913 13:34:29 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.913 13:34:29 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:36.913 13:34:29 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.913 13:34:29 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:36.913 13:34:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.172 13:34:29 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:37.172 13:34:29 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:37.172 13:34:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.431 Malloc0 00:06:37.431 13:34:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.690 Malloc1 00:06:37.690 13:34:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.690 13:34:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.690 /dev/nbd0 00:06:37.949 
13:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.949 13:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.949 13:34:30 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:06:37.949 13:34:30 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:37.949 13:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.950 1+0 records in 00:06:37.950 1+0 records out 00:06:37.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210368 s, 19.5 MB/s 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.950 /dev/nbd1 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.950 1+0 records in 00:06:37.950 1+0 records out 00:06:37.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238822 s, 17.2 MB/s 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:06:37.950 13:34:30 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.950 13:34:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.209 13:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.209 { 00:06:38.209 "nbd_device": "/dev/nbd0", 00:06:38.209 "bdev_name": "Malloc0" 00:06:38.209 }, 00:06:38.209 { 00:06:38.209 "nbd_device": "/dev/nbd1", 00:06:38.209 "bdev_name": "Malloc1" 00:06:38.209 } 00:06:38.209 ]' 00:06:38.209 13:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.209 { 00:06:38.209 "nbd_device": "/dev/nbd0", 00:06:38.209 "bdev_name": "Malloc0" 00:06:38.209 }, 00:06:38.209 { 00:06:38.209 "nbd_device": "/dev/nbd1", 00:06:38.209 "bdev_name": "Malloc1" 00:06:38.209 } 00:06:38.209 ]' 00:06:38.209 13:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.468 /dev/nbd1' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.468 /dev/nbd1' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.468 256+0 records in 00:06:38.468 256+0 records out 00:06:38.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010247 s, 102 MB/s 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.468 256+0 records in 00:06:38.468 256+0 records out 00:06:38.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212993 s, 49.2 MB/s 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.468 256+0 records in 00:06:38.468 256+0 records out 00:06:38.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233521 s, 44.9 MB/s 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.468 13:34:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@45 
-- # return 0 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.727 13:34:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.986 13:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.245 13:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.245 13:34:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.245 13:34:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.503 13:34:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.762 [2024-06-11 13:34:32.532696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.762 [2024-06-11 13:34:32.626110] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.762 [2024-06-11 13:34:32.626114] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.021 [2024-06-11 13:34:32.675711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.022 [2024-06-11 13:34:32.675759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:42.557 13:34:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3423771 /var/tmp/spdk-nbd.sock 00:06:42.557 13:34:35 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 3423771 ']' 00:06:42.557 13:34:35 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.557 13:34:35 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:42.557 13:34:35 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:42.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.557 13:34:35 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:42.557 13:34:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:06:42.816 13:34:35 event.app_repeat -- event/event.sh@39 -- # killprocess 3423771 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 3423771 ']' 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 3423771 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3423771 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3423771' 00:06:42.816 killing process with pid 3423771 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@968 -- # kill 3423771 00:06:42.816 13:34:35 event.app_repeat -- common/autotest_common.sh@973 -- # wait 3423771 00:06:43.075 spdk_app_start is called in Round 0. 00:06:43.075 Shutdown signal received, stop current app iteration 00:06:43.075 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:06:43.075 spdk_app_start is called in Round 1. 00:06:43.075 Shutdown signal received, stop current app iteration 00:06:43.075 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:06:43.075 spdk_app_start is called in Round 2. 00:06:43.075 Shutdown signal received, stop current app iteration 00:06:43.075 Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 reinitialization... 00:06:43.075 spdk_app_start is called in Round 3. 
00:06:43.075 Shutdown signal received, stop current app iteration 00:06:43.075 13:34:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.075 13:34:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.075 00:06:43.075 real 0m17.039s 00:06:43.075 user 0m37.038s 00:06:43.075 sys 0m3.220s 00:06:43.075 13:34:35 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:43.075 13:34:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.075 ************************************ 00:06:43.075 END TEST app_repeat 00:06:43.075 ************************************ 00:06:43.075 13:34:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.075 13:34:35 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.075 13:34:35 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:43.075 13:34:35 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.075 13:34:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.075 ************************************ 00:06:43.075 START TEST cpu_locks 00:06:43.075 ************************************ 00:06:43.075 13:34:35 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:43.075 * Looking for test storage... 00:06:43.075 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:43.075 13:34:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.075 13:34:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.075 13:34:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.075 13:34:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.075 13:34:35 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:43.075 13:34:35 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:43.075 13:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.075 ************************************ 00:06:43.075 START TEST default_locks 00:06:43.075 ************************************ 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3426612 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3426612 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3426612 ']' 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:43.075 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.075 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:43.334 13:34:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 [2024-06-11 13:34:36.006442] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:43.334 [2024-06-11 13:34:36.006499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426612 ] 00:06:43.334 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.334 [2024-06-11 13:34:36.082566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.334 [2024-06-11 13:34:36.182960] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.593 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:43.593 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:06:43.593 13:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3426612 00:06:43.593 13:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3426612 00:06:43.593 13:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.162 lslocks: write error 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3426612 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 3426612 ']' 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 3426612 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3426612 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3426612' 00:06:44.162 killing process with pid 3426612 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 3426612 00:06:44.162 13:34:36 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 3426612 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3426612 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3426612 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 3426612 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 3426612 ']' 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.420 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3426612) - No such process 00:06:44.420 ERROR: process (pid: 3426612) is no longer running 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:44.420 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.421 00:06:44.421 real 0m1.289s 00:06:44.421 user 0m1.271s 00:06:44.421 sys 0m0.572s 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.421 13:34:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.421 ************************************ 00:06:44.421 END TEST default_locks 00:06:44.421 ************************************ 00:06:44.421 13:34:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.421 13:34:37 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:44.421 13:34:37 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.421 13:34:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 ************************************ 00:06:44.680 START TEST default_locks_via_rpc 00:06:44.680 ************************************ 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3426854 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3426854 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3426854 ']' 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:44.680 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.680 [2024-06-11 13:34:37.355948] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:44.680 [2024-06-11 13:34:37.356013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3426854 ] 00:06:44.680 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.680 [2024-06-11 13:34:37.432703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.680 [2024-06-11 13:34:37.534304] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3426854 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3426854 00:06:44.939 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3426854 00:06:45.198 13:34:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 3426854 ']' 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 3426854 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3426854 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3426854' 00:06:45.198 killing process with pid 3426854 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 3426854 00:06:45.198 13:34:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 3426854 00:06:45.457 00:06:45.457 real 0m0.987s 00:06:45.457 user 0m0.948s 00:06:45.457 sys 0m0.422s 00:06:45.457 13:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:45.457 13:34:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.457 ************************************ 00:06:45.457 END TEST default_locks_via_rpc 00:06:45.457 ************************************ 00:06:45.457 13:34:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.457 13:34:38 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:45.457 13:34:38 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:45.457 13:34:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.716 ************************************ 00:06:45.716 START TEST non_locking_app_on_locked_coremask 00:06:45.716 ************************************ 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3427061 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3427061 /var/tmp/spdk.sock 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3427061 ']' 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.716 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:45.717 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.717 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:45.717 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.717 [2024-06-11 13:34:38.410223] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:45.717 [2024-06-11 13:34:38.410291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427061 ] 00:06:45.717 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.717 [2024-06-11 13:34:38.488432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.717 [2024-06-11 13:34:38.586594] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3427174 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3427174 /var/tmp/spdk2.sock 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3427174 ']' 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:45.976 13:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.976 [2024-06-11 13:34:38.829780] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:45.976 [2024-06-11 13:34:38.829866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427174 ] 00:06:45.976 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.234 [2024-06-11 13:34:38.933355] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:46.234 [2024-06-11 13:34:38.933389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.234 [2024-06-11 13:34:39.136016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.182 13:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:47.182 13:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:47.182 13:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3427061 00:06:47.182 13:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3427061 00:06:47.182 13:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.441 lslocks: write error 00:06:47.441 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3427061 00:06:47.441 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3427061 ']' 00:06:47.441 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3427061 00:06:47.441 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3427061 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3427061' 00:06:47.700 killing process with pid 3427061 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3427061 00:06:47.700 13:34:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3427061 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3427174 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3427174 ']' 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3427174 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3427174 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3427174' 00:06:48.269 
killing process with pid 3427174 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3427174 00:06:48.269 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3427174 00:06:48.838 00:06:48.838 real 0m3.135s 00:06:48.838 user 0m3.371s 00:06:48.838 sys 0m1.028s 00:06:48.838 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:48.838 13:34:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.838 ************************************ 00:06:48.838 END TEST non_locking_app_on_locked_coremask 00:06:48.838 ************************************ 00:06:48.838 13:34:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:48.838 13:34:41 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:48.838 13:34:41 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:48.838 13:34:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.838 ************************************ 00:06:48.838 START TEST locking_app_on_unlocked_coremask 00:06:48.838 ************************************ 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3427677 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3427677 /var/tmp/spdk.sock 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3427677 ']' 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:48.838 13:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.838 [2024-06-11 13:34:41.619276] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:48.838 [2024-06-11 13:34:41.619351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427677 ] 00:06:48.838 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.838 [2024-06-11 13:34:41.695927] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:48.838 [2024-06-11 13:34:41.695959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.097 [2024-06-11 13:34:41.789791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3427734 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3427734 /var/tmp/spdk2.sock 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3427734 ']' 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:49.356 13:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.356 [2024-06-11 13:34:42.035908] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:49.356 [2024-06-11 13:34:42.035975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3427734 ] 00:06:49.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.356 [2024-06-11 13:34:42.139538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.615 [2024-06-11 13:34:42.337963] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.183 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:50.183 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:50.183 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3427734 00:06:50.183 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3427734 00:06:50.183 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.752 lslocks: write error 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3427677 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3427677 ']' 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3427677 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3427677 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3427677' 00:06:50.752 killing process with pid 3427677 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3427677 00:06:50.752 13:34:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3427677 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3427734 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3427734 ']' 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 3427734 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3427734 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3427734' 00:06:51.691 killing process with pid 3427734 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 3427734 00:06:51.691 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 3427734 00:06:51.950 00:06:51.950 real 0m3.108s 00:06:51.950 user 0m3.316s 00:06:51.950 sys 0m1.048s 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.950 ************************************ 00:06:51.950 END TEST locking_app_on_unlocked_coremask 00:06:51.950 ************************************ 00:06:51.950 13:34:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.950 13:34:44 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:51.950 13:34:44 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.950 13:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.950 ************************************ 00:06:51.950 START TEST locking_app_on_locked_coremask 00:06:51.950 ************************************ 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3428198 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3428198 /var/tmp/spdk.sock 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3428198 ']' 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:51.950 13:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.950 [2024-06-11 13:34:44.791564] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:51.951 [2024-06-11 13:34:44.791638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428198 ] 00:06:51.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.210 [2024-06-11 13:34:44.868070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.210 [2024-06-11 13:34:44.971205] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3428207 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3428207 /var/tmp/spdk2.sock 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3428207 /var/tmp/spdk2.sock 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3428207 /var/tmp/spdk2.sock 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 3428207 ']' 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:52.469 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.469 [2024-06-11 13:34:45.237544] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:52.469 [2024-06-11 13:34:45.237616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428207 ] 00:06:52.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.469 [2024-06-11 13:34:45.333733] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3428198 has claimed it. 00:06:52.469 [2024-06-11 13:34:45.333772] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.483 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3428207) - No such process 00:06:53.483 ERROR: process (pid: 3428207) is no longer running 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3428198 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3428198 00:06:53.483 13:34:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.483 lslocks: write error 00:06:53.483 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3428198 00:06:53.483 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 3428198 ']' 00:06:53.483 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 3428198 00:06:53.483 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3428198 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3428198' 00:06:53.742 killing process with pid 3428198 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 3428198 00:06:53.742 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 3428198 00:06:54.001 00:06:54.001 real 0m2.041s 00:06:54.001 user 0m2.229s 00:06:54.001 sys 0m0.694s 00:06:54.001 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:06:54.001 13:34:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.001 ************************************ 00:06:54.001 END TEST locking_app_on_locked_coremask 00:06:54.001 ************************************ 00:06:54.001 13:34:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:54.001 13:34:46 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:54.001 13:34:46 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:54.001 13:34:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.001 ************************************ 00:06:54.001 START TEST locking_overlapped_coremask 00:06:54.001 ************************************ 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3428534 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3428534 /var/tmp/spdk.sock 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3428534 ']' 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:54.001 13:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.001 [2024-06-11 13:34:46.904524] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:54.001 [2024-06-11 13:34:46.904591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428534 ] 00:06:54.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.260 [2024-06-11 13:34:46.984739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.260 [2024-06-11 13:34:47.080601] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.260 [2024-06-11 13:34:47.080694] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.260 [2024-06-11 13:34:47.080698] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.196 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:55.196 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:06:55.196 13:34:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3428671 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3428671 /var/tmp/spdk2.sock 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 3428671 /var/tmp/spdk2.sock 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 3428671 /var/tmp/spdk2.sock 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 3428671 ']' 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:55.197 13:34:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.197 [2024-06-11 13:34:47.851297] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:06:55.197 [2024-06-11 13:34:47.851361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428671 ] 00:06:55.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.197 [2024-06-11 13:34:47.928256] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3428534 has claimed it. 00:06:55.197 [2024-06-11 13:34:47.928291] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.765 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (3428671) - No such process 00:06:55.765 ERROR: process (pid: 3428671) is no longer running 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.765 13:34:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3428534 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 3428534 ']' 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 3428534 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3428534 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3428534' 00:06:55.766 killing process with pid 3428534 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
3428534 00:06:55.766 13:34:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 3428534 00:06:56.334 00:06:56.334 real 0m2.126s 00:06:56.335 user 0m6.097s 00:06:56.335 sys 0m0.445s 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.335 ************************************ 00:06:56.335 END TEST locking_overlapped_coremask 00:06:56.335 ************************************ 00:06:56.335 13:34:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:56.335 13:34:49 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:56.335 13:34:49 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:56.335 13:34:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.335 ************************************ 00:06:56.335 START TEST locking_overlapped_coremask_via_rpc 00:06:56.335 ************************************ 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3428917 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3428917 /var/tmp/spdk.sock 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3428917 ']' 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:56.335 13:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.335 [2024-06-11 13:34:49.103870] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:56.335 [2024-06-11 13:34:49.103932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3428917 ] 00:06:56.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.335 [2024-06-11 13:34:49.173576] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:56.335 [2024-06-11 13:34:49.173612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.594 [2024-06-11 13:34:49.276209] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.594 [2024-06-11 13:34:49.276322] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.594 [2024-06-11 13:34:49.276327] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3429133 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3429133 /var/tmp/spdk2.sock 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3429133 ']' 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:57.162 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.162 [2024-06-11 13:34:50.037528] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:57.162 [2024-06-11 13:34:50.037602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429133 ] 00:06:57.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.422 [2024-06-11 13:34:50.116376] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.422 [2024-06-11 13:34:50.116407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.422 [2024-06-11 13:34:50.269900] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.422 [2024-06-11 13:34:50.273249] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.422 [2024-06-11 13:34:50.273249] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.359 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:58.359 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:58.359 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.360 [2024-06-11 13:34:50.957258] app.c: 772:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3428917 has claimed it. 
00:06:58.360 request: 00:06:58.360 { 00:06:58.360 "method": "framework_enable_cpumask_locks", 00:06:58.360 "req_id": 1 00:06:58.360 } 00:06:58.360 Got JSON-RPC error response 00:06:58.360 response: 00:06:58.360 { 00:06:58.360 "code": -32603, 00:06:58.360 "message": "Failed to claim CPU core: 2" 00:06:58.360 } 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3428917 /var/tmp/spdk.sock 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3428917 ']' 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:58.360 13:34:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3429133 /var/tmp/spdk2.sock 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 3429133 ']' 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:58.360 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.618 00:06:58.618 real 0m2.421s 00:06:58.618 user 0m1.155s 00:06:58.618 sys 0m0.197s 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:58.618 13:34:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.618 ************************************ 00:06:58.618 END TEST locking_overlapped_coremask_via_rpc 00:06:58.618 ************************************ 00:06:58.876 13:34:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:58.876 13:34:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3428917 ]] 00:06:58.876 13:34:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3428917 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3428917 ']' 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3428917 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3428917 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3428917' 00:06:58.876 killing process with pid 3428917 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3428917 00:06:58.876 13:34:51 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3428917 00:06:59.134 13:34:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3429133 ]] 00:06:59.134 13:34:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3429133 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3429133 ']' 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3429133 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3429133 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3429133' 00:06:59.134 killing process with pid 3429133 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 3429133 00:06:59.134 13:34:51 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 3429133 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3428917 ]] 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3428917 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3428917 ']' 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3428917 00:06:59.699 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3428917) - No such process 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3428917 is not found' 00:06:59.699 Process with pid 3428917 is not found 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3429133 ]] 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3429133 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 3429133 ']' 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 3429133 00:06:59.699 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (3429133) - No such process 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 3429133 is not found' 00:06:59.699 Process with pid 3429133 is not found 00:06:59.699 13:34:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:59.699 00:06:59.699 real 0m16.474s 00:06:59.699 user 0m30.601s 00:06:59.699 sys 0m5.340s 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.699 13:34:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.699 ************************************ 00:06:59.699 END TEST cpu_locks 00:06:59.699 ************************************ 00:06:59.699 00:06:59.699 real 0m42.217s 00:06:59.699 user 1m22.113s 00:06:59.699 sys 0m9.561s 00:06:59.699 13:34:52 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.699 13:34:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.699 ************************************ 00:06:59.699 END TEST event 00:06:59.699 ************************************ 00:06:59.699 13:34:52 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:59.699 13:34:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:59.699 13:34:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.699 13:34:52 -- common/autotest_common.sh@10 -- # set +x 00:06:59.699 ************************************ 00:06:59.699 START TEST thread 00:06:59.699 ************************************ 00:06:59.699 13:34:52 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:59.699 * Looking for test storage... 00:06:59.699 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:59.699 13:34:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.699 13:34:52 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:59.699 13:34:52 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.699 13:34:52 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.699 ************************************ 00:06:59.699 START TEST thread_poller_perf 00:06:59.699 ************************************ 00:06:59.699 13:34:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:59.699 [2024-06-11 13:34:52.560136] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:06:59.699 [2024-06-11 13:34:52.560188] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429652 ] 00:06:59.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.958 [2024-06-11 13:34:52.636861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.958 [2024-06-11 13:34:52.729513] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.958 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:01.338 ====================================== 00:07:01.338 busy:2107209308 (cyc) 00:07:01.338 total_run_count: 492000 00:07:01.338 tsc_hz: 2100000000 (cyc) 00:07:01.338 ====================================== 00:07:01.338 poller_cost: 4282 (cyc), 2039 (nsec) 00:07:01.338 00:07:01.338 real 0m1.271s 00:07:01.338 user 0m1.177s 00:07:01.338 sys 0m0.087s 00:07:01.338 13:34:53 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:01.338 13:34:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.338 ************************************ 00:07:01.338 END TEST thread_poller_perf 00:07:01.338 ************************************ 00:07:01.338 13:34:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.338 13:34:53 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:01.338 13:34:53 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:01.338 13:34:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.338 ************************************ 00:07:01.338 START TEST thread_poller_perf 00:07:01.338 ************************************ 00:07:01.338 13:34:53 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.338 [2024-06-11 13:34:53.900776] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:01.338 [2024-06-11 13:34:53.900867] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429885 ] 00:07:01.338 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.338 [2024-06-11 13:34:53.980107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.338 [2024-06-11 13:34:54.077081] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.338 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:02.275 ====================================== 00:07:02.275 busy:2101595240 (cyc) 00:07:02.275 total_run_count: 7671000 00:07:02.275 tsc_hz: 2100000000 (cyc) 00:07:02.275 ====================================== 00:07:02.275 poller_cost: 273 (cyc), 130 (nsec) 00:07:02.275 00:07:02.275 real 0m1.278s 00:07:02.275 user 0m1.181s 00:07:02.275 sys 0m0.091s 00:07:02.275 13:34:55 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:02.275 13:34:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.275 ************************************ 00:07:02.275 END TEST thread_poller_perf 00:07:02.275 ************************************ 00:07:02.533 13:34:55 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:02.533 13:34:55 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:02.533 13:34:55 thread -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:02.533 13:34:55 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:02.533 13:34:55 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.533 ************************************ 00:07:02.533 START TEST thread_spdk_lock 00:07:02.533 ************************************ 00:07:02.533 13:34:55 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:02.533 [2024-06-11 13:34:55.249257] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:02.533 [2024-06-11 13:34:55.249363] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430121 ] 00:07:02.533 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.533 [2024-06-11 13:34:55.330064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.533 [2024-06-11 13:34:55.430218] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.533 [2024-06-11 13:34:55.430225] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.101 [2024-06-11 13:34:55.949821] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:03.101 [2024-06-11 13:34:55.949865] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:03.101 [2024-06-11 13:34:55.949877] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x14cb6c0 00:07:03.101 [2024-06-11 13:34:55.950987] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:03.101 [2024-06-11 13:34:55.951092] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:03.101 [2024-06-11 13:34:55.951116] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:03.359 Starting test contend 00:07:03.359 Worker Delay Wait us Hold us Total us 00:07:03.359 0 3 154439 199326 353765 00:07:03.359 1 5 86353 298329 384683 00:07:03.359 PASS test contend 00:07:03.359 Starting test hold_by_poller 00:07:03.359 PASS test hold_by_poller 00:07:03.359 Starting test hold_by_message 00:07:03.359 PASS test hold_by_message 00:07:03.359 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:03.359 100014 assertions passed 00:07:03.359 0 assertions failed 00:07:03.359 00:07:03.359 real 0m0.801s 00:07:03.359 user 0m1.222s 00:07:03.359 sys 0m0.096s 00:07:03.359 13:34:56 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.359 13:34:56 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:07:03.359 ************************************ 00:07:03.359 END TEST thread_spdk_lock 00:07:03.359 ************************************ 00:07:03.359 00:07:03.359 real 0m3.639s 00:07:03.359 user 0m3.708s 00:07:03.359 sys 0m0.456s 00:07:03.359 13:34:56 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:03.360 13:34:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.360 ************************************ 00:07:03.360 END TEST thread 00:07:03.360 ************************************ 00:07:03.360 13:34:56 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:03.360 13:34:56 -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:03.360 13:34:56 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:03.360 13:34:56 -- common/autotest_common.sh@10 -- # set +x 00:07:03.360 ************************************ 00:07:03.360 START TEST accel 00:07:03.360 ************************************ 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:03.360 * Looking for test storage... 00:07:03.360 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:03.360 13:34:56 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:03.360 13:34:56 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:03.360 13:34:56 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:03.360 13:34:56 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3430200 00:07:03.360 13:34:56 accel -- accel/accel.sh@63 -- # waitforlisten 3430200 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@830 -- # '[' -z 3430200 ']' 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.360 13:34:56 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:03.360 13:34:56 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.360 13:34:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:03.360 13:34:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.360 13:34:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.360 13:34:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.360 13:34:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.360 13:34:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.360 13:34:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:03.360 13:34:56 accel -- accel/accel.sh@41 -- # jq -r . 00:07:03.360 [2024-06-11 13:34:56.240769] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:03.360 [2024-06-11 13:34:56.240834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430200 ] 00:07:03.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.619 [2024-06-11 13:34:56.310585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.619 [2024-06-11 13:34:56.409115] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.186 13:34:57 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:04.186 13:34:57 accel -- common/autotest_common.sh@863 -- # return 0 00:07:04.186 13:34:57 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:04.186 13:34:57 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:04.186 13:34:57 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:04.186 13:34:57 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:04.186 13:34:57 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:04.186 13:34:57 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:04.186 13:34:57 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:04.186 13:34:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.186 13:34:57 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:04.186 13:34:57 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 
13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # IFS== 00:07:04.446 13:34:57 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:04.446 13:34:57 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:04.446 13:34:57 accel -- accel/accel.sh@75 -- # killprocess 3430200 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@949 -- # '[' -z 3430200 ']' 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@953 -- # kill -0 3430200 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@954 -- # uname 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3430200 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3430200' 00:07:04.446 killing process with pid 3430200 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@968 -- # kill 3430200 00:07:04.446 13:34:57 accel -- common/autotest_common.sh@973 -- # 
wait 3430200 00:07:04.705 13:34:57 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:04.705 13:34:57 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:04.705 13:34:57 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:04.705 13:34:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:04.705 13:34:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.705 13:34:57 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:04.705 13:34:57 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:04.705 13:34:57 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:04.705 13:34:57 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:04.705 13:34:57 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:04.705 13:34:57 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:04.705 13:34:57 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:04.705 13:34:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.964 ************************************ 00:07:04.964 START TEST accel_missing_filename 00:07:04.964 ************************************ 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:04.964 13:34:57 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.964 
13:34:57 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:04.964 13:34:57 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:04.964 [2024-06-11 13:34:57.670598] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:04.964 [2024-06-11 13:34:57.670675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430494 ] 00:07:04.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.964 [2024-06-11 13:34:57.751551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.964 [2024-06-11 13:34:57.851070] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.223 [2024-06-11 13:34:57.900482] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.223 [2024-06-11 13:34:57.973884] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:05.223 A filename is required. 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:05.223 00:07:05.223 real 0m0.411s 00:07:05.223 user 0m0.303s 00:07:05.223 sys 0m0.149s 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.223 13:34:58 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:05.223 ************************************ 00:07:05.223 END TEST accel_missing_filename 00:07:05.223 ************************************ 00:07:05.223 13:34:58 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:05.223 13:34:58 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:05.223 13:34:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.223 13:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.223 ************************************ 00:07:05.223 START TEST accel_compress_verify 00:07:05.223 ************************************ 00:07:05.223 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:05.223 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:05.223 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:05.223 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:05.223 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.223 13:34:58 accel.accel_compress_verify -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:07:05.223 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.224 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:05.224 13:34:58 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:05.224 13:34:58 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.224 13:34:58 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.224 13:34:58 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.482 13:34:58 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.482 13:34:58 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.482 13:34:58 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.482 13:34:58 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:05.482 13:34:58 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.482 [2024-06-11 13:34:58.152442] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:05.482 [2024-06-11 13:34:58.152515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430671 ] 00:07:05.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.483 [2024-06-11 13:34:58.235344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.483 [2024-06-11 13:34:58.338950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.483 [2024-06-11 13:34:58.388311] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.742 [2024-06-11 13:34:58.461539] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:05.742 00:07:05.742 Compression does not support the verify option, aborting. 
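The abort above is the expected outcome of this test: the harness deliberately combines the compress workload with result verification (-y), which accel_perf refuses. As a standalone sketch of that invocation, taken from the xtrace above (the relative paths are an assumption and presume running from the root of an SPDK checkout rather than the Jenkins workspace path):

    # expected to fail: the compress workload does not support -y (verify result)
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib -y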
00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:05.742 00:07:05.742 real 0m0.419s 00:07:05.742 user 0m0.308s 00:07:05.742 sys 0m0.154s 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.742 13:34:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.742 ************************************ 00:07:05.742 END TEST accel_compress_verify 00:07:05.742 ************************************ 00:07:05.742 13:34:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:05.742 13:34:58 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:05.742 13:34:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:05.742 13:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.742 ************************************ 00:07:05.742 START TEST accel_wrong_workload 00:07:05.742 ************************************ 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:05.742 13:34:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:07:05.742 Unsupported workload type: foobar 00:07:05.742 [2024-06-11 13:34:58.634597] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:05.742 accel_perf options: 00:07:05.742 [-h help message] 00:07:05.742 [-q queue depth per core] 00:07:05.742 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:05.742 [-T number of threads per core 00:07:05.742 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:05.742 [-t time in seconds] 00:07:05.742 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:05.742 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:05.742 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:05.742 [-l for compress/decompress workloads, name of uncompressed input file 00:07:05.742 [-S for crc32c workload, use this seed value (default 0) 00:07:05.742 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:05.742 [-f for fill workload, use this BYTE value (default 255) 00:07:05.742 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:05.742 [-y verify result if this switch is on] 00:07:05.742 [-a tasks to allocate per core (default: same value as -q)] 00:07:05.742 Can be used to spread operations across a wider range of memory. 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:05.742 00:07:05.742 real 0m0.024s 00:07:05.742 user 0m0.011s 00:07:05.742 sys 0m0.013s 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:05.742 13:34:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:05.742 ************************************ 00:07:05.742 END TEST accel_wrong_workload 00:07:05.742 ************************************ 00:07:06.002 Error: writing output failed: Broken pipe 00:07:06.002 13:34:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:06.002 13:34:58 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:07:06.002 13:34:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:06.002 13:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 ************************************ 00:07:06.002 START TEST accel_negative_buffers 00:07:06.002 ************************************ 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.002 13:34:58 accel.accel_negative_buffers -- 
common/autotest_common.sh@641 -- # type -t accel_perf 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:06.002 13:34:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:06.002 -x option must be non-negative. 00:07:06.002 [2024-06-11 13:34:58.727483] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:06.002 accel_perf options: 00:07:06.002 [-h help message] 00:07:06.002 [-q queue depth per core] 00:07:06.002 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:06.002 [-T number of threads per core 00:07:06.002 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:06.002 [-t time in seconds] 00:07:06.002 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:06.002 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:06.002 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:06.002 [-l for compress/decompress workloads, name of uncompressed input file 00:07:06.002 [-S for crc32c workload, use this seed value (default 0) 00:07:06.002 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:06.002 [-f for fill workload, use this BYTE value (default 255) 00:07:06.002 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:06.002 [-y verify result if this switch is on] 00:07:06.002 [-a tasks to allocate per core (default: same value as -q)] 00:07:06.002 Can be used to spread operations across a wider range of memory. 
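The option listing printed above maps onto direct accel_perf invocations such as the sketches below; only flags shown in that listing (and in the run_test commands elsewhere in this log) are used, and the relative binary path is an assumption standing in for the full Jenkins workspace path:

    # software crc32c for 1 second, seed value 32, verify results (-y)
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # same workload with an io vector size of 2 (-C 2)
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2
    # negative case exercised just above: -x must be non-negative
    ./build/examples/accel_perf -t 1 -w xor -y -x -1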
00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:06.002 00:07:06.002 real 0m0.026s 00:07:06.002 user 0m0.009s 00:07:06.002 sys 0m0.017s 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:06.002 13:34:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 ************************************ 00:07:06.002 END TEST accel_negative_buffers 00:07:06.002 ************************************ 00:07:06.002 Error: writing output failed: Broken pipe 00:07:06.002 13:34:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:06.002 13:34:58 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:06.002 13:34:58 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:06.002 13:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.002 ************************************ 00:07:06.002 START TEST accel_crc32c 00:07:06.002 ************************************ 00:07:06.002 13:34:58 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:06.002 13:34:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:06.002 [2024-06-11 13:34:58.817438] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:06.002 [2024-06-11 13:34:58.817499] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430738 ] 00:07:06.002 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.002 [2024-06-11 13:34:58.894660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.262 [2024-06-11 13:34:58.997588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:06.262 13:34:59 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.262 13:34:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:07.642 13:35:00 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.642 00:07:07.642 real 0m1.412s 00:07:07.642 user 0m1.268s 00:07:07.642 sys 0m0.155s 00:07:07.642 13:35:00 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.642 13:35:00 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:07.642 ************************************ 00:07:07.642 END TEST accel_crc32c 00:07:07.642 ************************************ 00:07:07.642 13:35:00 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:07.642 13:35:00 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:07.642 13:35:00 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.642 13:35:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.642 ************************************ 00:07:07.642 START TEST accel_crc32c_C2 00:07:07.642 ************************************ 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:07.642 [2024-06-11 13:35:00.296152] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:07.642 [2024-06-11 13:35:00.296230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430983 ] 00:07:07.642 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.642 [2024-06-11 13:35:00.374473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.642 [2024-06-11 13:35:00.471411] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var 
val 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:07.642 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.643 13:35:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.022 00:07:09.022 real 0m1.392s 00:07:09.022 user 0m1.264s 00:07:09.022 sys 0m0.140s 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:09.022 13:35:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:09.022 ************************************ 00:07:09.022 END TEST accel_crc32c_C2 00:07:09.022 ************************************ 00:07:09.022 13:35:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:09.022 13:35:01 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:09.022 13:35:01 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:09.022 13:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.022 ************************************ 00:07:09.022 START TEST accel_copy 00:07:09.022 ************************************ 00:07:09.022 13:35:01 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.022 13:35:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:09.022 13:35:01 accel.accel_copy -- 
accel/accel.sh@41 -- # jq -r . 00:07:09.022 [2024-06-11 13:35:01.751250] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:09.022 [2024-06-11 13:35:01.751325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431268 ] 00:07:09.022 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.022 [2024-06-11 13:35:01.831029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.022 [2024-06-11 13:35:01.928239] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.282 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.283 13:35:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:10.660 13:35:03 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.660 00:07:10.660 real 0m1.413s 00:07:10.660 user 0m1.273s 00:07:10.660 sys 0m0.150s 00:07:10.660 13:35:03 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:10.660 13:35:03 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.660 ************************************ 00:07:10.660 END TEST accel_copy 00:07:10.660 ************************************ 00:07:10.660 13:35:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:10.660 13:35:03 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:10.660 13:35:03 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:10.660 13:35:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.660 ************************************ 00:07:10.660 START TEST accel_fill 00:07:10.660 ************************************ 00:07:10.660 13:35:03 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:10.660 13:35:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:10.660 [2024-06-11 13:35:03.226973] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
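The fill run that starts here exercises several of the options documented earlier in this log: a fill byte value of 128 (-f), queue depth 64 per core (-q), 64 tasks allocated per core (-a), and result verification (-y). As a standalone sketch (the relative path is an assumption for an SPDK build tree):

    # fill workload: byte value 128, queue depth 64, 64 tasks per core, verify
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y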
00:07:10.660 [2024-06-11 13:35:03.227044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431559 ] 00:07:10.660 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.660 [2024-06-11 13:35:03.305567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.661 [2024-06-11 13:35:03.402566] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 
accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:10.661 13:35:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var 
val 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.038 13:35:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.039 13:35:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:12.039 13:35:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.039 00:07:12.039 real 0m1.411s 00:07:12.039 user 0m1.276s 00:07:12.039 sys 0m0.146s 00:07:12.039 13:35:04 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:12.039 13:35:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:12.039 ************************************ 00:07:12.039 END TEST accel_fill 00:07:12.039 ************************************ 00:07:12.039 13:35:04 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:12.039 13:35:04 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:12.039 13:35:04 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:12.039 13:35:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.039 ************************************ 00:07:12.039 START TEST accel_copy_crc32c 00:07:12.039 ************************************ 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:12.039 [2024-06-11 13:35:04.692373] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
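The copy_crc32c run being traced here comes down to a single accel_perf invocation, logged verbatim above. A minimal way to reproduce it by hand against the same build tree, accepting the default software accel module instead of the JSON config the harness feeds over /dev/fd/62 via -c, would be:

  # 1-second copy+CRC32C workload; -y matches the val=Yes verification setting in the trace
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y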
00:07:12.039 [2024-06-11 13:35:04.692408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3431856 ] 00:07:12.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.039 [2024-06-11 13:35:04.755637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.039 [2024-06-11 13:35:04.851831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.039 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.040 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.040 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:12.040 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:12.040 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:12.040 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:12.040 13:35:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.417 00:07:13.417 real 0m1.386s 00:07:13.417 user 0m1.270s 00:07:13.417 sys 0m0.129s 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:13.417 13:35:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:13.417 ************************************ 00:07:13.417 END TEST accel_copy_crc32c 00:07:13.417 ************************************ 00:07:13.417 13:35:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.417 13:35:06 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:13.417 13:35:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:13.417 13:35:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.417 ************************************ 00:07:13.417 START TEST accel_copy_crc32c_C2 00:07:13.417 ************************************ 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:13.417 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:13.417 [2024-06-11 13:35:06.153936] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:13.417 [2024-06-11 13:35:06.154002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432106 ] 00:07:13.417 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.417 [2024-06-11 13:35:06.230077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.417 [2024-06-11 13:35:06.327229] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:13.677 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:13.678 13:35:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.057 00:07:15.057 real 0m1.404s 00:07:15.057 user 0m1.281s 00:07:15.057 sys 0m0.134s 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:15.057 13:35:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:15.057 ************************************ 00:07:15.057 END TEST 
accel_copy_crc32c_C2 00:07:15.057 ************************************ 00:07:15.057 13:35:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:15.057 13:35:07 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:15.057 13:35:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:15.057 13:35:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.057 ************************************ 00:07:15.057 START TEST accel_dualcast 00:07:15.057 ************************************ 00:07:15.057 13:35:07 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:15.057 13:35:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:15.057 [2024-06-11 13:35:07.618833] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
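The accel_dualcast test that starts here drives the same accel_perf binary with the dualcast opcode. A hand-run sketch, assuming the build path shown in this log and the default software module (no -c config passed), would be roughly:

  # 1-second dualcast workload on 4096-byte buffers (per the '4096 bytes' value in the trace), verified with -y
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dualcast -y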
00:07:15.057 [2024-06-11 13:35:07.618898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432346 ] 00:07:15.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.058 [2024-06-11 13:35:07.694913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.058 [2024-06-11 13:35:07.789326] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 
13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:15.058 13:35:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:08 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:16.437 13:35:08 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.437 00:07:16.437 real 0m1.396s 00:07:16.437 user 0m1.262s 00:07:16.437 sys 0m0.143s 00:07:16.437 13:35:08 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:16.437 13:35:08 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:16.437 ************************************ 00:07:16.437 END TEST accel_dualcast 00:07:16.437 ************************************ 00:07:16.437 13:35:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:16.437 13:35:09 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:16.437 13:35:09 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:16.437 13:35:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.437 ************************************ 00:07:16.437 START TEST accel_compare 00:07:16.437 ************************************ 00:07:16.437 13:35:09 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:16.437 [2024-06-11 13:35:09.078631] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
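Likewise, the accel_compare case is just the compare opcode on the same binary; a standalone sketch under the same assumptions (software module, defaults otherwise) would be:

  # 1-second compare workload, results verified (-y)
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y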
00:07:16.437 [2024-06-11 13:35:09.078722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432578 ] 00:07:16.437 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.437 [2024-06-11 13:35:09.157942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.437 [2024-06-11 13:35:09.254835] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.437 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:16.438 13:35:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.816 13:35:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:17.817 13:35:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.817 00:07:17.817 real 0m1.412s 00:07:17.817 user 0m1.276s 00:07:17.817 sys 0m0.147s 00:07:17.817 13:35:10 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:17.817 13:35:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:17.817 ************************************ 00:07:17.817 END TEST accel_compare 00:07:17.817 ************************************ 00:07:17.817 13:35:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:17.817 13:35:10 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:07:17.817 13:35:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:17.817 13:35:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.817 ************************************ 00:07:17.817 START TEST accel_xor 00:07:17.817 ************************************ 00:07:17.817 13:35:10 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:17.817 13:35:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:17.817 [2024-06-11 13:35:10.543903] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
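The first accel_xor pass appears to use two xor sources, judging by the val=2 entry in its trace. A hedged standalone equivalent, again assuming the software module and the binary path taken from this log:

  # 1-second xor workload across 2 source buffers, verified with -y
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y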
00:07:17.817 [2024-06-11 13:35:10.543938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432815 ] 00:07:17.817 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.817 [2024-06-11 13:35:10.606676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.817 [2024-06-11 13:35:10.703129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.076 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:18.077 13:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.012 
13:35:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:19.012 13:35:11 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.012 00:07:19.012 real 0m1.384s 00:07:19.012 user 0m1.272s 00:07:19.012 sys 0m0.125s 00:07:19.012 13:35:11 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:19.012 13:35:11 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:19.012 ************************************ 00:07:19.012 END TEST accel_xor 00:07:19.012 ************************************ 00:07:19.272 13:35:11 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:19.272 13:35:11 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:19.272 13:35:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:19.272 13:35:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.272 ************************************ 00:07:19.272 START TEST accel_xor 00:07:19.272 ************************************ 00:07:19.272 13:35:11 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.272 13:35:11 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.273 13:35:11 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.273 13:35:11 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.273 13:35:11 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:19.273 13:35:11 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:19.273 [2024-06-11 13:35:11.997665] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
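The second accel_xor pass repeats the workload with -x 3, which by the val=3 entry in the trace seems to raise the number of xor source buffers to three. A by-hand sketch under the same assumptions as the earlier examples:

  # 1-second xor workload, 3 source buffers (-x 3), verified with -y
  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3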
00:07:19.273 [2024-06-11 13:35:11.997730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433047 ] 00:07:19.273 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.273 [2024-06-11 13:35:12.073462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.273 [2024-06-11 13:35:12.170456] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.532 13:35:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.508 
13:35:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:20.508 13:35:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.508 00:07:20.508 real 0m1.407s 00:07:20.508 user 0m1.273s 00:07:20.508 sys 0m0.147s 00:07:20.508 13:35:13 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:20.508 13:35:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:20.508 ************************************ 00:07:20.508 END TEST accel_xor 00:07:20.508 ************************************ 00:07:20.815 13:35:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:20.815 13:35:13 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:20.815 13:35:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:20.815 13:35:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.815 ************************************ 00:07:20.815 START TEST accel_dif_verify 00:07:20.815 ************************************ 00:07:20.815 13:35:13 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:20.815 [2024-06-11 13:35:13.469184] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
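Note: each case in this section ends with the three accel.sh@27 checks visible above: a module was recorded, an opcode was recorded, and the software engine handled the work. A hedged one-line reconstruction of that check (variable names taken from the accel_module=/accel_opc= assignments in the trace, not from the accel.sh source) is:
  [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]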
00:07:20.815 [2024-06-11 13:35:13.469263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433283 ] 00:07:20.815 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.815 [2024-06-11 13:35:13.545938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.815 [2024-06-11 13:35:13.641962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.815 13:35:13 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 
13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:20.816 13:35:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.194 
13:35:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:22.194 13:35:14 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.194 00:07:22.194 real 0m1.399s 00:07:22.194 user 0m1.277s 00:07:22.194 sys 0m0.134s 00:07:22.194 13:35:14 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:22.194 13:35:14 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:22.194 ************************************ 00:07:22.194 END TEST accel_dif_verify 00:07:22.194 ************************************ 00:07:22.194 13:35:14 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:22.194 13:35:14 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:22.194 13:35:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:22.194 13:35:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.194 ************************************ 00:07:22.194 START TEST accel_dif_generate 00:07:22.194 ************************************ 00:07:22.194 13:35:14 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.194 
13:35:14 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:22.194 13:35:14 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:22.194 [2024-06-11 13:35:14.928224] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:22.194 [2024-06-11 13:35:14.928298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433524 ] 00:07:22.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.194 [2024-06-11 13:35:15.004616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.194 [2024-06-11 13:35:15.100912] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:22.454 13:35:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:23.833 13:35:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.833 00:07:23.833 real 0m1.401s 00:07:23.833 user 0m1.272s 00:07:23.833 sys 
0m0.142s 00:07:23.833 13:35:16 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:23.833 13:35:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:23.833 ************************************ 00:07:23.833 END TEST accel_dif_generate 00:07:23.833 ************************************ 00:07:23.833 13:35:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:23.833 13:35:16 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:07:23.833 13:35:16 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:23.833 13:35:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.833 ************************************ 00:07:23.833 START TEST accel_dif_generate_copy 00:07:23.833 ************************************ 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:23.833 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:23.834 [2024-06-11 13:35:16.398783] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
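Note: every case here follows the same wrapper pattern: run_test brackets an accel_test invocation with the START/END banners and the real/user/sys timing shown above. A sketch of that pattern, using the dif_generate_copy case that starts here (run_test and accel_test are the harness's own helpers; their definitions are not reproduced in this output):
  run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy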
00:07:23.834 [2024-06-11 13:35:16.398855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433753 ] 00:07:23.834 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.834 [2024-06-11 13:35:16.479620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.834 [2024-06-11 13:35:16.582607] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:23.834 13:35:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.213 00:07:25.213 real 0m1.421s 00:07:25.213 user 0m1.280s 00:07:25.213 sys 0m0.151s 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:25.213 13:35:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.213 ************************************ 00:07:25.213 END TEST accel_dif_generate_copy 00:07:25.213 ************************************ 00:07:25.213 13:35:17 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:25.213 13:35:17 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:25.213 13:35:17 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:07:25.213 13:35:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:25.213 13:35:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.213 ************************************ 00:07:25.213 START TEST accel_comp 00:07:25.213 ************************************ 00:07:25.213 13:35:17 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.213 13:35:17 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.214 13:35:17 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.214 13:35:17 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.214 13:35:17 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:25.214 13:35:17 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:25.214 [2024-06-11 13:35:17.885032] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:25.214 [2024-06-11 13:35:17.885095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433992 ] 00:07:25.214 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.214 [2024-06-11 13:35:17.965432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.214 [2024-06-11 13:35:18.062831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 
13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.214 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.473 13:35:18 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:25.473 13:35:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:26.410 13:35:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.410 00:07:26.410 real 0m1.415s 00:07:26.410 user 0m1.284s 00:07:26.410 sys 0m0.143s 00:07:26.410 13:35:19 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:26.410 13:35:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:26.410 ************************************ 00:07:26.410 END TEST accel_comp 00:07:26.410 ************************************ 00:07:26.410 13:35:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:26.410 13:35:19 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:07:26.410 13:35:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:26.410 13:35:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.668 ************************************ 00:07:26.668 START TEST accel_decomp 00:07:26.668 ************************************ 00:07:26.668 13:35:19 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:26.668 13:35:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:26.668 13:35:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:26.668 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:26.669 13:35:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:26.669 [2024-06-11 13:35:19.369860] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:26.669 [2024-06-11 13:35:19.369932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434241 ] 00:07:26.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.669 [2024-06-11 13:35:19.450057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.669 [2024-06-11 13:35:19.549860] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 
13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:26.928 13:35:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.865 13:35:20 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.865 00:07:27.865 real 0m1.420s 00:07:27.865 user 0m1.284s 00:07:27.865 sys 0m0.149s 00:07:27.865 13:35:20 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:27.865 13:35:20 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 ************************************ 00:07:27.865 END TEST accel_decomp 00:07:27.865 ************************************ 00:07:28.124 
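For reference, the accel_decomp run that just finished reduces to a single accel_perf decompress pass over the pre-compressed bib test file. Below is a minimal stand-alone sketch, assuming the workspace paths shown in this log and omitting the JSON accel config that the harness pipes in via /dev/fd/62 (with no modules configured the software path is used either way, matching accel_module=software above); the PERF and BIB variable names are illustrative only, not part of the harness.

  PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
  # 1-second decompress workload against the bib input, using the same -t/-w/-l/-y flags as the harness
  "$PERF" -t 1 -w decompress -l "$BIB" -y
  # The variants that follow in this log add, respectively:
  #   -o 0    -> the "full" variants (transfer size read above as '111250 bytes' instead of '4096 bytes')
  #   -m 0xf  -> the "mcore" variants (core mask 0xf; four reactors started on cores 0-3)
  #   -T 2    -> the "mthread" variants (the value 2 read in their config above)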
13:35:20 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.124 13:35:20 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:28.124 13:35:20 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:28.124 13:35:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.124 ************************************ 00:07:28.124 START TEST accel_decomp_full 00:07:28.124 ************************************ 00:07:28.124 13:35:20 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:28.124 13:35:20 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:28.124 [2024-06-11 13:35:20.849795] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:28.124 [2024-06-11 13:35:20.849849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434569 ] 00:07:28.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.124 [2024-06-11 13:35:20.925666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.124 [2024-06-11 13:35:21.020419] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.384 13:35:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.321 13:35:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.321 13:35:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.321 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.321 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 
-- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.581 13:35:22 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.581 00:07:29.581 real 0m1.404s 00:07:29.581 user 0m1.283s 00:07:29.581 sys 0m0.133s 00:07:29.581 13:35:22 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:29.581 13:35:22 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:29.581 ************************************ 00:07:29.581 END TEST accel_decomp_full 00:07:29.581 ************************************ 00:07:29.581 13:35:22 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:29.581 13:35:22 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:29.581 13:35:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:29.581 13:35:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.581 ************************************ 00:07:29.581 START TEST accel_decomp_mcore 00:07:29.581 ************************************ 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:29.581 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:29.581 [2024-06-11 13:35:22.325717] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:29.581 [2024-06-11 13:35:22.325782] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434873 ] 00:07:29.581 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.581 [2024-06-11 13:35:22.402741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.840 [2024-06-11 13:35:22.503565] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.840 [2024-06-11 13:35:22.503658] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.840 [2024-06-11 13:35:22.503763] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.840 [2024-06-11 13:35:22.503764] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.840 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:29.841 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:29.841 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:29.841 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:29.841 13:35:22 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.219 00:07:31.219 real 0m1.429s 00:07:31.219 user 0m4.693s 00:07:31.219 sys 0m0.153s 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:31.219 13:35:23 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:31.219 ************************************ 00:07:31.219 END TEST accel_decomp_mcore 00:07:31.219 ************************************ 00:07:31.220 13:35:23 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.220 13:35:23 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:31.220 13:35:23 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:31.220 13:35:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.220 ************************************ 00:07:31.220 START TEST accel_decomp_full_mcore 00:07:31.220 ************************************ 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:31.220 13:35:23 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:31.220 [2024-06-11 13:35:23.819932] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:31.220 [2024-06-11 13:35:23.820000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435133 ] 00:07:31.220 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.220 [2024-06-11 13:35:23.897925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.220 [2024-06-11 13:35:23.999155] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.220 [2024-06-11 13:35:23.999251] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.220 [2024-06-11 13:35:23.999293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.220 [2024-06-11 13:35:23.999294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:31.220 13:35:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.220 13:35:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.597 00:07:32.597 real 0m1.447s 00:07:32.597 user 0m4.753s 00:07:32.597 sys 0m0.159s 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:32.597 13:35:25 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:32.597 ************************************ 00:07:32.597 END TEST accel_decomp_full_mcore 00:07:32.597 ************************************ 00:07:32.597 13:35:25 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.597 13:35:25 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:07:32.597 13:35:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:32.597 13:35:25 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.597 ************************************ 00:07:32.597 START TEST accel_decomp_mthread 00:07:32.597 ************************************ 00:07:32.597 13:35:25 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.597 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:32.597 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:32.597 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.597 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.597 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:32.598 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@41 
-- # jq -r . 00:07:32.598 [2024-06-11 13:35:25.327273] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:32.598 [2024-06-11 13:35:25.327337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435365 ] 00:07:32.598 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.598 [2024-06-11 13:35:25.403693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.598 [2024-06-11 13:35:25.500579] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 
13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:32.857 13:35:25 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:32.857 13:35:25 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:34.235 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.236 00:07:34.236 real 0m1.415s 00:07:34.236 user 0m1.282s 00:07:34.236 sys 0m0.145s 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:34.236 13:35:26 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:34.236 ************************************ 00:07:34.236 END TEST accel_decomp_mthread 00:07:34.236 ************************************ 00:07:34.236 13:35:26 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.236 13:35:26 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:07:34.236 13:35:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:34.236 
13:35:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.236 ************************************ 00:07:34.236 START TEST accel_decomp_full_mthread 00:07:34.236 ************************************ 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:34.236 13:35:26 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:34.236 [2024-06-11 13:35:26.793186] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:34.236 [2024-06-11 13:35:26.793258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435602 ] 00:07:34.236 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.236 [2024-06-11 13:35:26.869927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.236 [2024-06-11 13:35:26.967508] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.236 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.237 13:35:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.613 00:07:35.613 real 0m1.441s 00:07:35.613 user 0m1.319s 00:07:35.613 sys 0m0.135s 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.613 13:35:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:35.613 ************************************ 00:07:35.613 END TEST accel_decomp_full_mthread 00:07:35.613 
************************************ 00:07:35.613 13:35:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:35.613 13:35:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.613 13:35:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:35.613 13:35:28 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:07:35.613 13:35:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:35.613 13:35:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.613 13:35:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.613 13:35:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.613 13:35:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.613 13:35:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.613 13:35:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.613 13:35:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:35.613 13:35:28 accel -- accel/accel.sh@41 -- # jq -r . 00:07:35.613 ************************************ 00:07:35.613 START TEST accel_dif_functional_tests 00:07:35.613 ************************************ 00:07:35.613 13:35:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.613 [2024-06-11 13:35:28.297446] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:35.613 [2024-06-11 13:35:28.297511] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435839 ] 00:07:35.613 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.613 [2024-06-11 13:35:28.373474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.613 [2024-06-11 13:35:28.471222] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.614 [2024-06-11 13:35:28.471244] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.614 [2024-06-11 13:35:28.471250] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.873 00:07:35.873 00:07:35.873 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.873 http://cunit.sourceforge.net/ 00:07:35.873 00:07:35.873 00:07:35.873 Suite: accel_dif 00:07:35.873 Test: verify: DIF generated, GUARD check ...passed 00:07:35.873 Test: verify: DIF generated, APPTAG check ...passed 00:07:35.873 Test: verify: DIF generated, REFTAG check ...passed 00:07:35.873 Test: verify: DIF not generated, GUARD check ...[2024-06-11 13:35:28.556316] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.873 passed 00:07:35.873 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 13:35:28.556376] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.873 passed 00:07:35.873 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 13:35:28.556409] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.873 passed 00:07:35.873 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:35.873 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 13:35:28.556474] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:35.873 passed 00:07:35.873 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:35.873 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:35.873 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:35.873 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-11 13:35:28.556608] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:35.873 passed 00:07:35.873 Test: verify copy: DIF generated, GUARD check ...passed 00:07:35.873 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:35.873 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:35.873 Test: verify copy: DIF not generated, GUARD check ...[2024-06-11 13:35:28.556761] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.873 passed 00:07:35.873 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-11 13:35:28.556796] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.873 passed 00:07:35.873 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-11 13:35:28.556830] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.873 passed 00:07:35.873 Test: generate copy: DIF generated, GUARD check ...passed 00:07:35.873 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:35.873 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:35.873 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:35.873 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:35.873 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:35.873 Test: generate copy: iovecs-len validate ...[2024-06-11 13:35:28.557073] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:35.873 passed 00:07:35.873 Test: generate copy: buffer alignment validate ...passed 00:07:35.873 00:07:35.873 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.873 suites 1 1 n/a 0 0 00:07:35.873 tests 26 26 26 0 0 00:07:35.873 asserts 115 115 115 0 n/a 00:07:35.873 00:07:35.873 Elapsed time = 0.003 seconds 00:07:35.873 00:07:35.873 real 0m0.483s 00:07:35.873 user 0m0.753s 00:07:35.873 sys 0m0.164s 00:07:35.873 13:35:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:35.873 13:35:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:35.873 ************************************ 00:07:35.873 END TEST accel_dif_functional_tests 00:07:35.873 ************************************ 00:07:36.132 00:07:36.132 real 0m32.663s 00:07:36.132 user 0m36.061s 00:07:36.132 sys 0m4.886s 00:07:36.132 13:35:28 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.132 13:35:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.132 ************************************ 00:07:36.132 END TEST accel 00:07:36.132 ************************************ 00:07:36.132 13:35:28 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.132 13:35:28 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:36.132 13:35:28 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.132 13:35:28 -- common/autotest_common.sh@10 -- # set +x 00:07:36.132 ************************************ 00:07:36.132 START TEST accel_rpc 00:07:36.132 ************************************ 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.132 * Looking for test storage... 00:07:36.132 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:36.132 13:35:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.132 13:35:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3435962 00:07:36.132 13:35:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3435962 00:07:36.132 13:35:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 3435962 ']' 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:36.132 13:35:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.132 [2024-06-11 13:35:28.965528] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:36.132 [2024-06-11 13:35:28.965597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435962 ] 00:07:36.132 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.132 [2024-06-11 13:35:29.042855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.391 [2024-06-11 13:35:29.147036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.391 13:35:29 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:36.391 13:35:29 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:07:36.391 13:35:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:36.391 13:35:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:36.391 13:35:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:36.391 13:35:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:36.391 13:35:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:36.391 13:35:29 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:36.391 13:35:29 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:36.391 13:35:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.391 ************************************ 00:07:36.391 START TEST accel_assign_opcode 00:07:36.391 ************************************ 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.391 [2024-06-11 13:35:29.243682] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.391 [2024-06-11 13:35:29.251688] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.391 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:36.651 13:35:29 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:36.651 software 00:07:36.651 00:07:36.651 real 0m0.281s 00:07:36.651 user 0m0.050s 00:07:36.651 sys 0m0.012s 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:36.651 13:35:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:36.651 ************************************ 00:07:36.651 END TEST accel_assign_opcode 00:07:36.651 ************************************ 00:07:36.651 13:35:29 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3435962 00:07:36.651 13:35:29 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 3435962 ']' 00:07:36.651 13:35:29 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 3435962 00:07:36.651 13:35:29 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:07:36.651 13:35:29 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:36.651 13:35:29 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3435962 00:07:36.910 13:35:29 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:36.910 13:35:29 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:36.910 13:35:29 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3435962' 00:07:36.910 killing process with pid 3435962 00:07:36.910 13:35:29 accel_rpc -- common/autotest_common.sh@968 -- # kill 3435962 00:07:36.910 13:35:29 accel_rpc -- common/autotest_common.sh@973 -- # wait 3435962 00:07:37.170 00:07:37.170 real 0m1.104s 00:07:37.170 user 0m1.078s 00:07:37.170 sys 0m0.436s 00:07:37.170 13:35:29 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:37.170 13:35:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.170 ************************************ 00:07:37.170 END TEST accel_rpc 00:07:37.170 ************************************ 00:07:37.170 13:35:29 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.170 13:35:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:37.170 13:35:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:37.170 13:35:29 -- common/autotest_common.sh@10 -- # set +x 00:07:37.170 ************************************ 00:07:37.170 START TEST app_cmdline 00:07:37.170 ************************************ 00:07:37.170 13:35:30 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:37.428 * Looking for test storage... 
00:07:37.428 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:37.428 13:35:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.428 13:35:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3436194 00:07:37.428 13:35:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3436194 00:07:37.429 13:35:30 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.429 13:35:30 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 3436194 ']' 00:07:37.429 13:35:30 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.429 13:35:30 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:37.429 13:35:30 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.429 13:35:30 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:37.429 13:35:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.429 [2024-06-11 13:35:30.139507] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:37.429 [2024-06-11 13:35:30.139586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436194 ] 00:07:37.429 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.429 [2024-06-11 13:35:30.220132] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.429 [2024-06-11 13:35:30.318208] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.687 13:35:30 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:37.687 13:35:30 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:07:37.687 13:35:30 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.946 { 00:07:37.946 "version": "SPDK v24.09-pre git sha1 9ccef4907", 00:07:37.946 "fields": { 00:07:37.946 "major": 24, 00:07:37.946 "minor": 9, 00:07:37.946 "patch": 0, 00:07:37.946 "suffix": "-pre", 00:07:37.946 "commit": "9ccef4907" 00:07:37.946 } 00:07:37.946 } 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.946 13:35:30 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:37.946 13:35:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:37.946 13:35:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.946 13:35:30 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:38.204 13:35:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.204 13:35:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.204 13:35:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.204 13:35:30 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:07:38.204 13:35:30 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.204 13:35:30 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:38.204 13:35:30 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.204 13:35:30 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:38.205 13:35:30 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.205 13:35:30 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:38.205 13:35:30 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.205 13:35:30 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:38.205 13:35:30 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.205 13:35:30 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.205 request: 00:07:38.205 { 00:07:38.205 "method": "env_dpdk_get_mem_stats", 00:07:38.205 "req_id": 1 00:07:38.205 } 00:07:38.205 Got JSON-RPC error response 00:07:38.205 response: 00:07:38.205 { 00:07:38.205 "code": -32601, 00:07:38.205 "message": "Method not found" 00:07:38.205 } 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:38.463 13:35:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3436194 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 3436194 ']' 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 3436194 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 3436194 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 3436194' 00:07:38.463 killing process with pid 3436194 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@968 -- # kill 3436194 00:07:38.463 13:35:31 app_cmdline -- common/autotest_common.sh@973 -- # wait 3436194 00:07:38.721 00:07:38.721 real 0m1.497s 00:07:38.721 user 0m1.861s 00:07:38.721 sys 0m0.476s 00:07:38.721 13:35:31 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.721 
13:35:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.721 ************************************ 00:07:38.721 END TEST app_cmdline 00:07:38.721 ************************************ 00:07:38.721 13:35:31 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:38.721 13:35:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:38.721 13:35:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:38.721 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:38.721 ************************************ 00:07:38.721 START TEST version 00:07:38.721 ************************************ 00:07:38.721 13:35:31 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:38.979 * Looking for test storage... 00:07:38.979 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:38.979 13:35:31 version -- app/version.sh@17 -- # get_header_version major 00:07:38.979 13:35:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # cut -f2 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.979 13:35:31 version -- app/version.sh@17 -- # major=24 00:07:38.979 13:35:31 version -- app/version.sh@18 -- # get_header_version minor 00:07:38.979 13:35:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # cut -f2 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.979 13:35:31 version -- app/version.sh@18 -- # minor=9 00:07:38.979 13:35:31 version -- app/version.sh@19 -- # get_header_version patch 00:07:38.979 13:35:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # cut -f2 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.979 13:35:31 version -- app/version.sh@19 -- # patch=0 00:07:38.979 13:35:31 version -- app/version.sh@20 -- # get_header_version suffix 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # cut -f2 00:07:38.979 13:35:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:38.979 13:35:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:38.979 13:35:31 version -- app/version.sh@20 -- # suffix=-pre 00:07:38.979 13:35:31 version -- app/version.sh@22 -- # version=24.9 00:07:38.979 13:35:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.979 13:35:31 version -- app/version.sh@28 -- # version=24.9rc0 00:07:38.979 13:35:31 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:38.979 13:35:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.979 13:35:31 version -- 
app/version.sh@30 -- # py_version=24.9rc0 00:07:38.979 13:35:31 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:38.979 00:07:38.979 real 0m0.171s 00:07:38.979 user 0m0.090s 00:07:38.979 sys 0m0.118s 00:07:38.979 13:35:31 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:38.979 13:35:31 version -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 ************************************ 00:07:38.979 END TEST version 00:07:38.979 ************************************ 00:07:38.979 13:35:31 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@198 -- # uname -s 00:07:38.979 13:35:31 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:38.979 13:35:31 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:38.979 13:35:31 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:38.979 13:35:31 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:38.979 13:35:31 -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:38.979 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 13:35:31 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:07:38.979 13:35:31 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:07:38.979 13:35:31 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:07:38.979 13:35:31 -- spdk/autotest.sh@371 -- # [[ 1 -eq 1 ]] 00:07:38.979 13:35:31 -- spdk/autotest.sh@372 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:38.979 13:35:31 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:38.979 13:35:31 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:38.979 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:07:38.979 ************************************ 00:07:38.979 START TEST llvm_fuzz 00:07:38.979 ************************************ 00:07:38.979 13:35:31 llvm_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:39.239 * Looking for test storage... 
00:07:39.239 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@547 -- # fuzzers=() 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@547 -- # local fuzzers 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@549 -- # [[ -n '' ]] 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@552 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@556 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:07:39.239 13:35:31 llvm_fuzz -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:39.239 13:35:31 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:07:39.239 ************************************ 00:07:39.239 START TEST nvmf_fuzz 00:07:39.239 ************************************ 00:07:39.239 13:35:31 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:39.239 * Looking for test storage... 
00:07:39.239 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@34 -- # set -e 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:39.239 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:39.240 13:35:32 
llvm_fuzz.nvmf_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:39.240 #define SPDK_CONFIG_H 00:07:39.240 #define SPDK_CONFIG_APPS 1 00:07:39.240 #define SPDK_CONFIG_ARCH native 00:07:39.240 #undef SPDK_CONFIG_ASAN 00:07:39.240 #undef SPDK_CONFIG_AVAHI 00:07:39.240 #undef SPDK_CONFIG_CET 00:07:39.240 #define SPDK_CONFIG_COVERAGE 1 00:07:39.240 #define SPDK_CONFIG_CROSS_PREFIX 00:07:39.240 #undef SPDK_CONFIG_CRYPTO 00:07:39.240 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:39.240 #undef SPDK_CONFIG_CUSTOMOCF 00:07:39.240 #undef SPDK_CONFIG_DAOS 00:07:39.240 #define SPDK_CONFIG_DAOS_DIR 00:07:39.240 #define SPDK_CONFIG_DEBUG 1 00:07:39.240 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:39.240 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:39.240 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:39.240 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:39.240 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:39.240 #undef SPDK_CONFIG_DPDK_UADK 00:07:39.240 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:39.240 #define SPDK_CONFIG_EXAMPLES 1 00:07:39.240 #undef SPDK_CONFIG_FC 00:07:39.240 #define SPDK_CONFIG_FC_PATH 00:07:39.240 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:39.240 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:39.240 #undef SPDK_CONFIG_FUSE 00:07:39.240 #define SPDK_CONFIG_FUZZER 1 00:07:39.240 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:07:39.240 #undef SPDK_CONFIG_GOLANG 00:07:39.240 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:39.240 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:39.240 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:39.240 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:39.240 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:39.240 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:39.240 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:39.240 #define SPDK_CONFIG_IDXD 1 00:07:39.240 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:39.240 #undef SPDK_CONFIG_IPSEC_MB 00:07:39.240 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:39.240 #define SPDK_CONFIG_ISAL 1 00:07:39.240 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:39.240 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:39.240 #define SPDK_CONFIG_LIBDIR 00:07:39.240 #undef SPDK_CONFIG_LTO 00:07:39.240 #define SPDK_CONFIG_MAX_LCORES 00:07:39.240 #define SPDK_CONFIG_NVME_CUSE 1 00:07:39.240 #undef SPDK_CONFIG_OCF 00:07:39.240 #define SPDK_CONFIG_OCF_PATH 00:07:39.240 #define SPDK_CONFIG_OPENSSL_PATH 00:07:39.240 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:39.240 #define SPDK_CONFIG_PGO_DIR 00:07:39.240 #undef SPDK_CONFIG_PGO_USE 00:07:39.240 #define SPDK_CONFIG_PREFIX /usr/local 00:07:39.240 #undef SPDK_CONFIG_RAID5F 00:07:39.240 #undef 
SPDK_CONFIG_RBD 00:07:39.240 #define SPDK_CONFIG_RDMA 1 00:07:39.240 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:39.240 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:39.240 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:39.240 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:39.240 #undef SPDK_CONFIG_SHARED 00:07:39.240 #undef SPDK_CONFIG_SMA 00:07:39.240 #define SPDK_CONFIG_TESTS 1 00:07:39.240 #undef SPDK_CONFIG_TSAN 00:07:39.240 #define SPDK_CONFIG_UBLK 1 00:07:39.240 #define SPDK_CONFIG_UBSAN 1 00:07:39.240 #undef SPDK_CONFIG_UNIT_TESTS 00:07:39.240 #undef SPDK_CONFIG_URING 00:07:39.240 #define SPDK_CONFIG_URING_PATH 00:07:39.240 #undef SPDK_CONFIG_URING_ZNS 00:07:39.240 #undef SPDK_CONFIG_USDT 00:07:39.240 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:39.240 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:39.240 #define SPDK_CONFIG_VFIO_USER 1 00:07:39.240 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:39.240 #define SPDK_CONFIG_VHOST 1 00:07:39.240 #define SPDK_CONFIG_VIRTIO 1 00:07:39.240 #undef SPDK_CONFIG_VTUNE 00:07:39.240 #define SPDK_CONFIG_VTUNE_DIR 00:07:39.240 #define SPDK_CONFIG_WERROR 1 00:07:39.240 #define SPDK_CONFIG_WPDK_DIR 00:07:39.240 #undef SPDK_CONFIG_XNVME 00:07:39.240 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:07:39.240 13:35:32 
llvm_fuzz.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:39.240 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # uname -s 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@68 -- # PM_OS=Linux 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[0]= 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@58 -- # : 1 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@62 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@64 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@66 -- # : 1 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@68 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@70 -- # : 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@72 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@74 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@76 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@78 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@80 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@82 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@84 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@86 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@88 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@90 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@92 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@94 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@96 -- # : 0 00:07:39.241 
13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@98 -- # : 1 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@100 -- # : 1 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@104 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@106 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@108 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@110 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@112 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@114 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@116 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@118 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@120 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@122 -- # : 1 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@124 -- # : 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@126 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@128 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@130 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@132 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@136 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@138 -- # : 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@140 -- # : true 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@142 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@144 -- # : 0 00:07:39.241 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@146 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@148 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@150 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@152 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@154 -- # : 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@156 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@158 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@160 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@162 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@164 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@167 -- # : 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@169 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@171 -- # : 0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
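The long run of ": 0" / "export SPDK_TEST_..." pairs traced above is the usual bash default-then-export idiom: each flag keeps whatever value autorun-spdk.conf already gave it, otherwise it falls back to the default printed after the colon. A minimal sketch of that pattern, using a few of the flags visible in this run (the exact wording of autotest_common.sh is not reproduced here):

  # Default-then-export idiom behind the autotest_common.sh trace above.
  # If the caller (autorun-spdk.conf) set a flag, ':' keeps it; otherwise the default applies.
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"
  : "${SPDK_TEST_FUZZER:=0}"
  : "${SPDK_TEST_FUZZER_SHORT:=0}"
  : "${SPDK_RUN_UBSAN:=0}"
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"   # non-boolean defaults show up in the trace as ": rdma"
  export SPDK_RUN_FUNCTIONAL_TEST SPDK_TEST_FUZZER SPDK_TEST_FUZZER_SHORT \
         SPDK_RUN_UBSAN SPDK_TEST_NVMF_TRANSPORT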
00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@200 -- # cat 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j88 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:39.501 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3436767 ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@318 -- # kill -0 3436767 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:39.502 
13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.MEelyT 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.MEelyT/tests/nvmf /tmp/spdk.MEelyT 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # df -T 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=82736009216 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94507954176 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11771944960 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47249264640 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253975040 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895683584 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901594112 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5910528 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253217280 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253979136 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=761856 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450790912 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450795008 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:39.502 * Looking for test storage... 
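The set_test_storage trace above reads "df -T" into parallel associative arrays keyed by mount point, then checks whether the filesystem behind the requested test directory has the roughly 2 GiB it asked for. A stand-alone sketch of that probe, with field and array names taken from the trace; the unit conversion and the 95% fullness guard are stated as assumptions:

  # Sketch of the storage probe visible in the set_test_storage trace above.
  declare -A mounts fss sizes avails uses
  requested_size=2214592512                 # ~2 GiB plus slack, as printed in the trace

  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))      # df -T reports 1K blocks; bytes assumed here
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  target_dir=$1                             # e.g. .../spdk/test/fuzz/llvm/nvmf
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  target_space=${avails[$mount]}

  if (( target_space >= requested_size )); then
      # The trace additionally rejects the root overlay if used + requested space
      # would push the filesystem past ~95% full (new_size=13986537472 in this run).
      new_size=$(( uses[$mount] + requested_size ))
      (( new_size * 100 / sizes[$mount] > 95 )) || printf '* Found test storage at %s\n' "$target_dir"
  fi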
00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@374 -- # target_space=82736009216 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@381 -- # new_size=13986537472 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.502 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@389 -- # return 0 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1681 -- # set -o errtrace 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1686 -- # true 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1688 -- # xtrace_fd 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@27 -- # exec 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@29 -- # exec 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@18 -- # set -x 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@8 -- # pids=() 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@70 -- # local time=1 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:39.502 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4400 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:39.503 13:35:32 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 
-s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:07:39.503 [2024-06-11 13:35:32.265457] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:39.503 [2024-06-11 13:35:32.265553] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436807 ] 00:07:39.503 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.761 [2024-06-11 13:35:32.468861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.761 [2024-06-11 13:35:32.552789] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.761 [2024-06-11 13:35:32.616818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:39.761 [2024-06-11 13:35:32.633185] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:39.761 INFO: Running with entropic power schedule (0xFF, 100). 00:07:39.761 INFO: Seed: 2570564110 00:07:39.761 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:39.761 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:39.761 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:39.761 INFO: A corpus is not provided, starting from an empty corpus 00:07:39.761 #2 INITED exec/s: 0 rss: 63Mb 00:07:39.761 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
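The command actually launched at nvmf/run.sh@45 above is the libFuzzer-instrumented llvm_nvme_fuzz app, pointed at an in-process NVMe-oF/TCP target listening on 127.0.0.1:4400. Reconstructed from the trace with the workspace path factored out; the per-flag comments are a hedged reading of the run script, not quoted from the tool's help text:

  # Reconstruction of the llvm_nvme_fuzz invocation from the run.sh trace above.
  SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400'

  args=(
      -m 0x1                                  # core mask: a single core
      -s 512                                  # hugepage memory for the app, in MB
      -P "$SPDK_ROOT/../output/llvm/"         # output location handed over by the run script
      -F "$trid"                              # transport ID of the target to fuzz
      -c /tmp/fuzz_json_0.conf                # JSON config, trsvcid rewritten from 4420 to 4400
      -t 1                                    # time limit in seconds (short mode)
      -D "$SPDK_ROOT/../corpus/llvm_nvmf_0"   # persistent corpus directory
      -Z 0                                    # fuzzer index 0 of the 25 registered fuzzers
  )
  "$SPDK_ROOT/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" "${args[@]}"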
00:07:39.761 This may also happen if the target rejected all inputs we tried so far 00:07:40.019 [2024-06-11 13:35:32.688843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.019 [2024-06-11 13:35:32.688879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.019 NEW_FUNC[1/687]: 0x482e80 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:40.019 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:40.019 #3 NEW cov: 11828 ft: 11828 corp: 2/76b lim: 320 exec/s: 0 rss: 70Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:07:40.019 [2024-06-11 13:35:32.899475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:40.019 [2024-06-11 13:35:32.899527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.277 #4 NEW cov: 11976 ft: 12775 corp: 3/142b lim: 320 exec/s: 0 rss: 70Mb L: 66/75 MS: 1 InsertRepeatedBytes- 00:07:40.277 [2024-06-11 13:35:32.959451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.277 [2024-06-11 13:35:32.959486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.277 #5 NEW cov: 11982 ft: 13004 corp: 4/217b lim: 320 exec/s: 0 rss: 70Mb L: 75/75 MS: 1 ShuffleBytes- 00:07:40.277 [2024-06-11 13:35:33.029761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.278 [2024-06-11 13:35:33.029795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.278 [2024-06-11 13:35:33.029872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.278 [2024-06-11 13:35:33.029889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.278 NEW_FUNC[1/1]: 0x1386840 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2042 00:07:40.278 #6 NEW cov: 12098 ft: 13489 corp: 5/356b lim: 320 exec/s: 0 rss: 70Mb L: 139/139 MS: 1 InsertRepeatedBytes- 00:07:40.278 [2024-06-11 13:35:33.109877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:40.278 [2024-06-11 13:35:33.109910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.278 #12 NEW cov: 12098 ft: 13551 corp: 6/423b lim: 320 exec/s: 0 rss: 70Mb L: 67/139 MS: 1 InsertByte- 00:07:40.278 [2024-06-11 13:35:33.180071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 
0x2f2f2f2f2f2f2f2f 00:07:40.278 [2024-06-11 13:35:33.180105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.536 #13 NEW cov: 12098 ft: 13605 corp: 7/490b lim: 320 exec/s: 0 rss: 71Mb L: 67/139 MS: 1 ShuffleBytes- 00:07:40.536 [2024-06-11 13:35:33.250549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.536 [2024-06-11 13:35:33.250582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.536 [2024-06-11 13:35:33.250661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:49494949 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4949494949494949 00:07:40.536 [2024-06-11 13:35:33.250683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.536 [2024-06-11 13:35:33.250759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.536 [2024-06-11 13:35:33.250775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.536 #14 NEW cov: 12098 ft: 13842 corp: 8/693b lim: 320 exec/s: 0 rss: 71Mb L: 203/203 MS: 1 CopyPart- 00:07:40.536 [2024-06-11 13:35:33.330642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.536 [2024-06-11 13:35:33.330674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.536 [2024-06-11 13:35:33.330751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.536 [2024-06-11 13:35:33.330767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.536 #15 NEW cov: 12098 ft: 13895 corp: 9/832b lim: 320 exec/s: 0 rss: 71Mb L: 139/203 MS: 1 ShuffleBytes- 00:07:40.536 [2024-06-11 13:35:33.380598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:40.536 [2024-06-11 13:35:33.380631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.536 #16 NEW cov: 12098 ft: 13942 corp: 10/899b lim: 320 exec/s: 0 rss: 71Mb L: 67/203 MS: 1 ChangeBit- 00:07:40.794 [2024-06-11 13:35:33.450865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:40.794 [2024-06-11 13:35:33.450897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.794 #17 NEW cov: 12098 ft: 13975 corp: 11/965b lim: 320 exec/s: 0 rss: 71Mb L: 66/203 MS: 1 ShuffleBytes- 00:07:40.794 [2024-06-11 13:35:33.500968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 
cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:40.794 [2024-06-11 13:35:33.501000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.794 #18 NEW cov: 12098 ft: 14011 corp: 12/1033b lim: 320 exec/s: 0 rss: 71Mb L: 68/203 MS: 1 InsertByte- 00:07:40.795 [2024-06-11 13:35:33.551105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.795 [2024-06-11 13:35:33.551138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.795 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:40.795 #19 NEW cov: 12121 ft: 14100 corp: 13/1108b lim: 320 exec/s: 0 rss: 71Mb L: 75/203 MS: 1 ChangeBinInt- 00:07:40.795 [2024-06-11 13:35:33.601347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.795 [2024-06-11 13:35:33.601379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.795 [2024-06-11 13:35:33.601453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:40.795 [2024-06-11 13:35:33.601470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.795 #20 NEW cov: 12121 ft: 14180 corp: 14/1293b lim: 320 exec/s: 0 rss: 71Mb L: 185/203 MS: 1 InsertRepeatedBytes- 00:07:40.795 [2024-06-11 13:35:33.681518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:40.795 [2024-06-11 13:35:33.681551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.053 #21 NEW cov: 12121 ft: 14199 corp: 15/1361b lim: 320 exec/s: 21 rss: 71Mb L: 68/203 MS: 1 InsertByte- 00:07:41.053 [2024-06-11 13:35:33.731598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:41.053 [2024-06-11 13:35:33.731630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.053 #22 NEW cov: 12121 ft: 14208 corp: 16/1429b lim: 320 exec/s: 22 rss: 71Mb L: 68/203 MS: 1 ChangeBinInt- 00:07:41.053 [2024-06-11 13:35:33.802095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.053 [2024-06-11 13:35:33.802128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.053 [2024-06-11 13:35:33.802205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:5 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.053 [2024-06-11 13:35:33.802223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.053 #23 NEW cov: 12121 ft: 14678 corp: 17/1568b lim: 320 exec/s: 23 rss: 72Mb L: 139/203 MS: 1 CrossOver- 00:07:41.053 [2024-06-11 13:35:33.882132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2fa22f2f2f2f0a 00:07:41.053 [2024-06-11 13:35:33.882164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.053 #35 NEW cov: 12121 ft: 14713 corp: 18/1639b lim: 320 exec/s: 35 rss: 72Mb L: 71/203 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:41.053 [2024-06-11 13:35:33.932182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:41.054 [2024-06-11 13:35:33.932219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.054 #36 NEW cov: 12121 ft: 14724 corp: 19/1706b lim: 320 exec/s: 36 rss: 72Mb L: 67/203 MS: 1 CMP- DE: "7\252\247v\302\345\003\000"- 00:07:41.312 [2024-06-11 13:35:33.982471] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.312 [2024-06-11 13:35:33.982503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.312 [2024-06-11 13:35:33.982578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.312 [2024-06-11 13:35:33.982594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.312 #37 NEW cov: 12121 ft: 14748 corp: 20/1886b lim: 320 exec/s: 37 rss: 72Mb L: 180/203 MS: 1 InsertRepeatedBytes- 00:07:41.312 [2024-06-11 13:35:34.032564] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:41.312 [2024-06-11 13:35:34.032596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.312 #38 NEW cov: 12121 ft: 14779 corp: 21/1953b lim: 320 exec/s: 38 rss: 72Mb L: 67/203 MS: 1 ChangeByte- 00:07:41.312 [2024-06-11 13:35:34.102493] ctrlr.c:1884:nvmf_ctrlr_get_features_reservation_persistence: *ERROR*: Get Features - Invalid Namespace ID 00:07:41.312 [2024-06-11 13:35:34.102877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST RESERVE PERSIST cid:4 cdw10:83838383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8383838383838383 00:07:41.313 [2024-06-11 13:35:34.102911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.313 [2024-06-11 13:35:34.102989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (83) qid:0 cid:5 nsid:83838383 cdw10:83838383 cdw11:83838383 SGL TRANSPORT DATA BLOCK TRANSPORT 0x8383838383838383 00:07:41.313 [2024-06-11 13:35:34.103007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.313 NEW_FUNC[1/1]: 0x11db5d0 in nvmf_ctrlr_get_features_reservation_persistence 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1873 00:07:41.313 #39 NEW cov: 12145 ft: 14817 corp: 22/2115b lim: 320 exec/s: 39 rss: 72Mb L: 162/203 MS: 1 InsertRepeatedBytes- 00:07:41.313 [2024-06-11 13:35:34.162870] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:41.313 [2024-06-11 13:35:34.162903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.313 #40 NEW cov: 12145 ft: 14844 corp: 23/2183b lim: 320 exec/s: 40 rss: 72Mb L: 68/203 MS: 1 InsertByte- 00:07:41.571 [2024-06-11 13:35:34.233358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.571 [2024-06-11 13:35:34.233392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.571 [2024-06-11 13:35:34.233469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:5 nsid:ffff4949 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.571 [2024-06-11 13:35:34.233486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.571 #46 NEW cov: 12145 ft: 14857 corp: 24/2330b lim: 320 exec/s: 46 rss: 72Mb L: 147/203 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:07:41.571 [2024-06-11 13:35:34.283229] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2b2f2f2f2f2f 00:07:41.571 [2024-06-11 13:35:34.283264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.571 #52 NEW cov: 12145 ft: 14879 corp: 25/2398b lim: 320 exec/s: 52 rss: 72Mb L: 68/203 MS: 1 InsertByte- 00:07:41.571 [2024-06-11 13:35:34.333658] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0a494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4949494949494949 00:07:41.571 [2024-06-11 13:35:34.333692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.571 [2024-06-11 13:35:34.333772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (32) qid:0 cid:5 nsid:49494949 cdw10:00000000 cdw11:49494949 SGL TRANSPORT DATA BLOCK TRANSPORT 0x4949494949494949 00:07:41.571 [2024-06-11 13:35:34.333789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.571 [2024-06-11 13:35:34.333865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.571 [2024-06-11 13:35:34.333881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.571 #55 NEW cov: 12145 ft: 14900 corp: 26/2594b lim: 320 exec/s: 55 rss: 72Mb L: 196/203 MS: 3 EraseBytes-InsertByte-CrossOver- 00:07:41.571 [2024-06-11 13:35:34.413602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 
00:07:41.571 [2024-06-11 13:35:34.413639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.571 #56 NEW cov: 12145 ft: 14903 corp: 27/2662b lim: 320 exec/s: 56 rss: 72Mb L: 68/203 MS: 1 CMP- DE: "\376\377\377\377\000\000\000\000"- 00:07:41.831 [2024-06-11 13:35:34.484078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:4 nsid:49494949 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.831 [2024-06-11 13:35:34.484112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.831 [2024-06-11 13:35:34.484193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (49) qid:0 cid:5 nsid:49494949 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.831 [2024-06-11 13:35:34.484217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.831 #57 NEW cov: 12145 ft: 14910 corp: 28/2813b lim: 320 exec/s: 57 rss: 72Mb L: 151/203 MS: 1 CopyPart- 00:07:41.831 [2024-06-11 13:35:34.564030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2f2f2f2f2f2f2f2f 00:07:41.831 [2024-06-11 13:35:34.564065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.831 #58 NEW cov: 12145 ft: 14916 corp: 29/2929b lim: 320 exec/s: 58 rss: 72Mb L: 116/203 MS: 1 CopyPart- 00:07:41.831 [2024-06-11 13:35:34.614162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:2f2f2f2f SGL TRANSPORT DATA BLOCK TRANSPORT 0x2b2f2f2f2f2f2f2f 00:07:41.831 [2024-06-11 13:35:34.614194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.831 #59 NEW cov: 12145 ft: 14938 corp: 30/2995b lim: 320 exec/s: 59 rss: 72Mb L: 66/203 MS: 1 ChangeBit- 00:07:41.831 [2024-06-11 13:35:34.664371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.831 [2024-06-11 13:35:34.664402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.831 [2024-06-11 13:35:34.664478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:41.831 [2024-06-11 13:35:34.664495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.831 #60 NEW cov: 12145 ft: 14971 corp: 31/3180b lim: 320 exec/s: 30 rss: 72Mb L: 185/203 MS: 1 CopyPart- 00:07:41.831 #60 DONE cov: 12145 ft: 14971 corp: 31/3180b lim: 320 exec/s: 30 rss: 72Mb 00:07:41.831 ###### Recommended dictionary. ###### 00:07:41.831 "7\252\247v\302\345\003\000" # Uses: 0 00:07:41.831 "\000\000\000\000\000\000\000\000" # Uses: 1 00:07:41.831 "\376\377\377\377\000\000\000\000" # Uses: 0 00:07:41.831 ###### End of recommended dictionary. 
###### 00:07:41.831 Done 60 runs in 2 second(s) 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4401 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:42.090 13:35:34 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:07:42.090 [2024-06-11 13:35:34.914049] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:42.090 [2024-06-11 13:35:34.914127] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437245 ] 00:07:42.090 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.349 [2024-06-11 13:35:35.123364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.349 [2024-06-11 13:35:35.206815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.607 [2024-06-11 13:35:35.270683] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.607 [2024-06-11 13:35:35.287052] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:42.607 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:42.607 INFO: Seed: 930600160 00:07:42.607 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:42.607 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:42.607 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:42.607 INFO: A corpus is not provided, starting from an empty corpus 00:07:42.607 #2 INITED exec/s: 0 rss: 65Mb 00:07:42.607 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:42.607 This may also happen if the target rejected all inputs we tried so far 00:07:42.607 [2024-06-11 13:35:35.332364] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:42.607 [2024-06-11 13:35:35.332639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.607 [2024-06-11 13:35:35.332668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.607 NEW_FUNC[1/687]: 0x483780 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:42.607 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:42.607 #3 NEW cov: 11911 ft: 11907 corp: 2/10b lim: 30 exec/s: 0 rss: 71Mb L: 9/9 MS: 1 CMP- DE: "X\000\000\000\000\000\000\000"- 00:07:42.865 [2024-06-11 13:35:35.522738] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:42.865 [2024-06-11 13:35:35.523005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.523036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.865 #10 NEW cov: 12041 ft: 12530 corp: 3/19b lim: 30 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 CopyPart-PersAutoDict- DE: "X\000\000\000\000\000\000\000"- 00:07:42.865 [2024-06-11 13:35:35.562823] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.865 [2024-06-11 13:35:35.562972] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.865 [2024-06-11 13:35:35.563113] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.865 [2024-06-11 13:35:35.563374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:08ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.563399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.865 [2024-06-11 13:35:35.563459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.563470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.865 [2024-06-11 13:35:35.563525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.563538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.865 #15 NEW cov: 12053 ft: 13158 corp: 4/41b lim: 30 exec/s: 0 rss: 72Mb L: 22/22 MS: 5 ChangeBit-ChangeBit-CopyPart-ChangeBit-InsertRepeatedBytes- 00:07:42.865 [2024-06-11 13:35:35.602906] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.865 [2024-06-11 13:35:35.603048] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.865 [2024-06-11 13:35:35.603299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.603322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.865 [2024-06-11 13:35:35.603380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.603392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.865 #16 NEW cov: 12138 ft: 13588 corp: 5/55b lim: 30 exec/s: 0 rss: 72Mb L: 14/22 MS: 1 InsertRepeatedBytes- 00:07:42.865 [2024-06-11 13:35:35.642995] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:42.865 [2024-06-11 13:35:35.643137] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:42.865 [2024-06-11 13:35:35.643387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.865 [2024-06-11 13:35:35.643411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.866 [2024-06-11 13:35:35.643469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.866 [2024-06-11 13:35:35.643482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.866 #17 NEW cov: 12138 ft: 13699 corp: 6/69b lim: 30 exec/s: 0 rss: 72Mb L: 14/22 MS: 1 PersAutoDict- DE: "X\000\000\000\000\000\000\000"- 00:07:42.866 [2024-06-11 13:35:35.693168] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:42.866 [2024-06-11 13:35:35.693320] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xff 00:07:42.866 [2024-06-11 13:35:35.693569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.866 [2024-06-11 13:35:35.693596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.866 [2024-06-11 13:35:35.693654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.866 [2024-06-11 13:35:35.693666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.866 #18 NEW cov: 12138 ft: 13832 corp: 7/84b lim: 30 exec/s: 0 rss: 72Mb L: 15/22 MS: 1 InsertByte- 00:07:42.866 [2024-06-11 13:35:35.743283] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:42.866 [2024-06-11 13:35:35.743424] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:07:42.866 [2024-06-11 13:35:35.743669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.866 [2024-06-11 13:35:35.743693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.866 [2024-06-11 13:35:35.743748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.866 [2024-06-11 13:35:35.743760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.866 #19 NEW cov: 12138 ft: 13932 corp: 8/96b lim: 30 exec/s: 0 rss: 72Mb L: 12/22 MS: 1 CrossOver- 00:07:43.124 [2024-06-11 13:35:35.783330] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x27 00:07:43.124 [2024-06-11 13:35:35.783597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.124 [2024-06-11 13:35:35.783620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.124 #20 NEW cov: 12138 ft: 14006 corp: 9/105b lim: 30 exec/s: 0 rss: 72Mb L: 9/22 MS: 1 ChangeByte- 00:07:43.124 [2024-06-11 13:35:35.833529] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.124 [2024-06-11 13:35:35.833782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.124 [2024-06-11 13:35:35.833805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.124 #21 NEW cov: 12138 ft: 14026 corp: 10/114b lim: 30 exec/s: 0 rss: 72Mb L: 9/22 MS: 1 EraseBytes- 00:07:43.124 [2024-06-11 13:35:35.873668] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:43.124 [2024-06-11 13:35:35.873813] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:43.124 [2024-06-11 13:35:35.874062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.124 [2024-06-11 13:35:35.874086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.124 [2024-06-11 13:35:35.874144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.124 [2024-06-11 13:35:35.874155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.124 #22 NEW cov: 12138 ft: 14084 corp: 11/128b lim: 30 exec/s: 0 rss: 72Mb L: 14/22 
MS: 1 ChangeByte- 00:07:43.124 [2024-06-11 13:35:35.913738] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x27 00:07:43.124 [2024-06-11 13:35:35.913988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.125 [2024-06-11 13:35:35.914014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.125 #23 NEW cov: 12138 ft: 14130 corp: 12/137b lim: 30 exec/s: 0 rss: 72Mb L: 9/22 MS: 1 ChangeBit- 00:07:43.125 [2024-06-11 13:35:35.963890] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:43.125 [2024-06-11 13:35:35.964146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.125 [2024-06-11 13:35:35.964168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.125 #24 NEW cov: 12138 ft: 14163 corp: 13/146b lim: 30 exec/s: 0 rss: 72Mb L: 9/22 MS: 1 CrossOver- 00:07:43.125 [2024-06-11 13:35:36.014042] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.125 [2024-06-11 13:35:36.014184] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:07:43.125 [2024-06-11 13:35:36.014435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.125 [2024-06-11 13:35:36.014457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.125 [2024-06-11 13:35:36.014515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.125 [2024-06-11 13:35:36.014527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.383 #25 NEW cov: 12138 ft: 14195 corp: 14/158b lim: 30 exec/s: 0 rss: 72Mb L: 12/22 MS: 1 ShuffleBytes- 00:07:43.383 [2024-06-11 13:35:36.064185] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:43.383 [2024-06-11 13:35:36.064336] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (64516) > buf size (4096) 00:07:43.383 [2024-06-11 13:35:36.064593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.064615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.383 [2024-06-11 13:35:36.064672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.064685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.383 #26 NEW cov: 12138 ft: 14252 corp: 15/174b lim: 30 exec/s: 0 rss: 72Mb L: 16/22 MS: 1 InsertByte- 00:07:43.383 [2024-06-11 13:35:36.114300] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log 
page offset 0x30000ffff 00:07:43.383 [2024-06-11 13:35:36.114552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:efff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.114575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.383 #27 NEW cov: 12138 ft: 14277 corp: 16/183b lim: 30 exec/s: 0 rss: 72Mb L: 9/22 MS: 1 ChangeBit- 00:07:43.383 [2024-06-11 13:35:36.164557] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:43.383 [2024-06-11 13:35:36.164819] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (35844) > buf size (4096) 00:07:43.383 [2024-06-11 13:35:36.165073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0058 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.165096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.383 [2024-06-11 13:35:36.165153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.165168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.383 [2024-06-11 13:35:36.165215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:23000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.165226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.383 #28 NEW cov: 12155 ft: 14387 corp: 17/206b lim: 30 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 PersAutoDict- DE: "X\000\000\000\000\000\000\000"- 00:07:43.383 [2024-06-11 13:35:36.204638] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.383 [2024-06-11 13:35:36.204777] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.383 [2024-06-11 13:35:36.204912] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.383 [2024-06-11 13:35:36.205157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:08ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.205179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.383 [2024-06-11 13:35:36.205233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.205245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.383 [2024-06-11 13:35:36.205295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.383 [2024-06-11 13:35:36.205306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.383 NEW_FUNC[1/1]: 
0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:43.383 #29 NEW cov: 12178 ft: 14427 corp: 18/228b lim: 30 exec/s: 0 rss: 72Mb L: 22/23 MS: 1 ShuffleBytes- 00:07:43.384 [2024-06-11 13:35:36.254735] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:43.384 [2024-06-11 13:35:36.254993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.384 [2024-06-11 13:35:36.255016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.384 #30 NEW cov: 12178 ft: 14445 corp: 19/237b lim: 30 exec/s: 0 rss: 72Mb L: 9/23 MS: 1 ChangeBinInt- 00:07:43.384 [2024-06-11 13:35:36.294879] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261476) > buf size (4096) 00:07:43.384 [2024-06-11 13:35:36.295143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff580000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.384 [2024-06-11 13:35:36.295165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.643 #31 NEW cov: 12178 ft: 14446 corp: 20/246b lim: 30 exec/s: 31 rss: 72Mb L: 9/23 MS: 1 PersAutoDict- DE: "X\000\000\000\000\000\000\000"- 00:07:43.643 [2024-06-11 13:35:36.334994] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:43.643 [2024-06-11 13:35:36.335263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.643 [2024-06-11 13:35:36.335286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.643 #33 NEW cov: 12178 ft: 14462 corp: 21/253b lim: 30 exec/s: 33 rss: 72Mb L: 7/23 MS: 2 EraseBytes-CopyPart- 00:07:43.643 [2024-06-11 13:35:36.385147] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.643 [2024-06-11 13:35:36.385314] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.643 [2024-06-11 13:35:36.385559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.643 [2024-06-11 13:35:36.385583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.643 [2024-06-11 13:35:36.385642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.643 [2024-06-11 13:35:36.385654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.643 #34 NEW cov: 12178 ft: 14466 corp: 22/267b lim: 30 exec/s: 34 rss: 72Mb L: 14/23 MS: 1 ShuffleBytes- 00:07:43.643 [2024-06-11 13:35:36.425277] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:43.643 [2024-06-11 13:35:36.425527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:43.643 [2024-06-11 13:35:36.425550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.643 #35 NEW cov: 12178 ft: 14486 corp: 23/276b lim: 30 exec/s: 35 rss: 73Mb L: 9/23 MS: 1 CMP- DE: "\365\377\377\377"- 00:07:43.643 [2024-06-11 13:35:36.475455] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc3e5 00:07:43.643 [2024-06-11 13:35:36.475597] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:07:43.643 [2024-06-11 13:35:36.475845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b83000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.643 [2024-06-11 13:35:36.475868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.643 [2024-06-11 13:35:36.475924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:030083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.643 [2024-06-11 13:35:36.475935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.643 #36 NEW cov: 12178 ft: 14500 corp: 24/288b lim: 30 exec/s: 36 rss: 73Mb L: 12/23 MS: 1 CMP- DE: "\233\203\013d\303\345\003\000"- 00:07:43.643 [2024-06-11 13:35:36.525571] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:43.643 [2024-06-11 13:35:36.525822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.643 [2024-06-11 13:35:36.525844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.643 #37 NEW cov: 12178 ft: 14511 corp: 25/297b lim: 30 exec/s: 37 rss: 73Mb L: 9/23 MS: 1 ChangeByte- 00:07:43.901 [2024-06-11 13:35:36.565790] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.901 [2024-06-11 13:35:36.565932] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.901 [2024-06-11 13:35:36.566067] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.901 [2024-06-11 13:35:36.566210] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:43.901 [2024-06-11 13:35:36.566462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:08ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.901 [2024-06-11 13:35:36.566485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.901 [2024-06-11 13:35:36.566542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.901 [2024-06-11 13:35:36.566557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.901 [2024-06-11 13:35:36.566612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.901 [2024-06-11 13:35:36.566622] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.901 [2024-06-11 13:35:36.566677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.901 [2024-06-11 13:35:36.566689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.901 #38 NEW cov: 12178 ft: 14989 corp: 26/325b lim: 30 exec/s: 38 rss: 73Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:43.901 [2024-06-11 13:35:36.605795] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x27 00:07:43.901 [2024-06-11 13:35:36.606061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.901 [2024-06-11 13:35:36.606084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.901 #39 NEW cov: 12178 ft: 15006 corp: 27/335b lim: 30 exec/s: 39 rss: 73Mb L: 10/28 MS: 1 InsertByte- 00:07:43.901 [2024-06-11 13:35:36.645958] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (524288) > buf size (4096) 00:07:43.901 [2024-06-11 13:35:36.646106] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000000ff 00:07:43.902 [2024-06-11 13:35:36.646250] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.902 [2024-06-11 13:35:36.646503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.646526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.902 [2024-06-11 13:35:36.646585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:78c383e5 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.646597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.902 [2024-06-11 13:35:36.646649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.646660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.902 #40 NEW cov: 12178 ft: 15028 corp: 28/355b lim: 30 exec/s: 40 rss: 73Mb L: 20/28 MS: 1 CMP- DE: "U\273\340x\303\345\003\000"- 00:07:43.902 [2024-06-11 13:35:36.686056] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:43.902 [2024-06-11 13:35:36.686313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000027 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.686336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.902 #41 NEW cov: 12178 ft: 15090 corp: 29/361b lim: 30 exec/s: 41 rss: 73Mb L: 6/28 MS: 1 EraseBytes- 00:07:43.902 [2024-06-11 13:35:36.726175] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000ff0a 00:07:43.902 [2024-06-11 13:35:36.726438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:efff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.726461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.902 #42 NEW cov: 12178 ft: 15096 corp: 30/367b lim: 30 exec/s: 42 rss: 73Mb L: 6/28 MS: 1 EraseBytes- 00:07:43.902 [2024-06-11 13:35:36.776393] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1048576) > buf size (4096) 00:07:43.902 [2024-06-11 13:35:36.776659] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:43.902 [2024-06-11 13:35:36.776798] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:43.902 [2024-06-11 13:35:36.777051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.777074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.902 [2024-06-11 13:35:36.777125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.777137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.902 [2024-06-11 13:35:36.777188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.777202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.902 [2024-06-11 13:35:36.777258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.902 [2024-06-11 13:35:36.777269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.902 #43 NEW cov: 12178 ft: 15102 corp: 31/392b lim: 30 exec/s: 43 rss: 73Mb L: 25/28 MS: 1 InsertRepeatedBytes- 00:07:44.160 [2024-06-11 13:35:36.826456] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:44.160 [2024-06-11 13:35:36.826707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.160 [2024-06-11 13:35:36.826730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.160 #44 NEW cov: 12178 ft: 15133 corp: 32/399b lim: 30 exec/s: 44 rss: 73Mb L: 7/28 MS: 1 CopyPart- 00:07:44.160 [2024-06-11 13:35:36.876637] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:44.160 [2024-06-11 13:35:36.876783] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (64516) > buf size (4096) 00:07:44.160 [2024-06-11 13:35:36.877042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0058 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:44.160 [2024-06-11 13:35:36.877065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.160 [2024-06-11 13:35:36.877124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.160 [2024-06-11 13:35:36.877137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.160 #45 NEW cov: 12178 ft: 15158 corp: 33/415b lim: 30 exec/s: 45 rss: 73Mb L: 16/28 MS: 1 ChangeBinInt- 00:07:44.160 [2024-06-11 13:35:36.926751] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:44.160 [2024-06-11 13:35:36.927008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.160 [2024-06-11 13:35:36.927031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.160 #46 NEW cov: 12178 ft: 15165 corp: 34/424b lim: 30 exec/s: 46 rss: 73Mb L: 9/28 MS: 1 ChangeByte- 00:07:44.161 [2024-06-11 13:35:36.966962] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.161 [2024-06-11 13:35:36.967107] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.161 [2024-06-11 13:35:36.967252] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.161 [2024-06-11 13:35:36.967387] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:44.161 [2024-06-11 13:35:36.967639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:110083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.161 [2024-06-11 13:35:36.967662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.161 [2024-06-11 13:35:36.967718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.161 [2024-06-11 13:35:36.967730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.161 [2024-06-11 13:35:36.967801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.161 [2024-06-11 13:35:36.967813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.161 [2024-06-11 13:35:36.967869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.161 [2024-06-11 13:35:36.967880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.161 #47 NEW cov: 12178 ft: 15238 corp: 35/452b lim: 30 exec/s: 47 rss: 73Mb L: 28/28 MS: 1 CMP- DE: "\021\000"- 00:07:44.161 [2024-06-11 13:35:37.017024] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (159248) > buf size (4096) 00:07:44.161 [2024-06-11 
13:35:37.017305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b83000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.161 [2024-06-11 13:35:37.017329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.161 #48 NEW cov: 12178 ft: 15286 corp: 36/462b lim: 30 exec/s: 48 rss: 74Mb L: 10/28 MS: 1 EraseBytes- 00:07:44.161 [2024-06-11 13:35:37.067178] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:44.161 [2024-06-11 13:35:37.067440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.161 [2024-06-11 13:35:37.067462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.419 [2024-06-11 13:35:37.107283] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (90116) > buf size (4096) 00:07:44.419 [2024-06-11 13:35:37.107544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.419 [2024-06-11 13:35:37.107567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.419 #50 NEW cov: 12178 ft: 15299 corp: 37/471b lim: 30 exec/s: 50 rss: 74Mb L: 9/28 MS: 2 ChangeBit-ChangeByte- 00:07:44.419 [2024-06-11 13:35:37.147503] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000cdff 00:07:44.419 [2024-06-11 13:35:37.147649] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.419 [2024-06-11 13:35:37.147785] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.419 [2024-06-11 13:35:37.147922] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:44.419 [2024-06-11 13:35:37.148174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:110083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.419 [2024-06-11 13:35:37.148205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.419 [2024-06-11 13:35:37.148262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.419 [2024-06-11 13:35:37.148273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.419 [2024-06-11 13:35:37.148329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.148341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.148396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.148407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.197613] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000cdff 00:07:44.420 [2024-06-11 13:35:37.197764] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:44.420 [2024-06-11 13:35:37.197901] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000003ff 00:07:44.420 [2024-06-11 13:35:37.198042] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:07:44.420 [2024-06-11 13:35:37.198306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:110083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.198329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.198385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.198397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.198451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.198462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.198514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.198525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.420 #52 NEW cov: 12178 ft: 15311 corp: 38/500b lim: 30 exec/s: 52 rss: 74Mb L: 29/29 MS: 2 InsertByte-ChangeBinInt- 00:07:44.420 [2024-06-11 13:35:37.237687] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x303 00:07:44.420 [2024-06-11 13:35:37.237834] ctrlr.c:2628:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (789520) > buf size (4096) 00:07:44.420 [2024-06-11 13:35:37.238078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:58000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.238101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.238157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:03038303 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.238168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.420 #53 NEW cov: 12178 ft: 15317 corp: 39/515b lim: 30 exec/s: 53 rss: 74Mb L: 15/29 MS: 1 InsertRepeatedBytes- 00:07:44.420 [2024-06-11 13:35:37.287817] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc3e5 00:07:44.420 [2024-06-11 13:35:37.287960] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000aff 00:07:44.420 [2024-06-11 13:35:37.288206] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b83000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.288229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.288284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:030083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.288296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.420 #54 NEW cov: 12178 ft: 15320 corp: 40/527b lim: 30 exec/s: 54 rss: 74Mb L: 12/29 MS: 1 ShuffleBytes- 00:07:44.420 [2024-06-11 13:35:37.327994] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc3e5 00:07:44.420 [2024-06-11 13:35:37.328138] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xc3e5 00:07:44.420 [2024-06-11 13:35:37.328285] ctrlr.c:2616:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:44.420 [2024-06-11 13:35:37.328539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:9b83000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.328562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.328617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:9b83000b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.328629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.420 [2024-06-11 13:35:37.328682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:03000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.420 [2024-06-11 13:35:37.328693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.678 #55 NEW cov: 12178 ft: 15324 corp: 41/547b lim: 30 exec/s: 27 rss: 74Mb L: 20/29 MS: 1 PersAutoDict- DE: "\233\203\013d\303\345\003\000"- 00:07:44.678 #55 DONE cov: 12178 ft: 15324 corp: 41/547b lim: 30 exec/s: 27 rss: 74Mb 00:07:44.678 ###### Recommended dictionary. ###### 00:07:44.678 "X\000\000\000\000\000\000\000" # Uses: 4 00:07:44.678 "\365\377\377\377" # Uses: 0 00:07:44.678 "\233\203\013d\303\345\003\000" # Uses: 1 00:07:44.678 "U\273\340x\303\345\003\000" # Uses: 0 00:07:44.678 "\021\000" # Uses: 0 00:07:44.678 ###### End of recommended dictionary. 
###### 00:07:44.678 Done 55 runs in 2 second(s) 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4402 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:44.678 13:35:37 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:07:44.678 [2024-06-11 13:35:37.540969] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:44.678 [2024-06-11 13:35:37.541053] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437674 ] 00:07:44.678 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.936 [2024-06-11 13:35:37.751956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.936 [2024-06-11 13:35:37.836593] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.194 [2024-06-11 13:35:37.900594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.194 [2024-06-11 13:35:37.916960] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:45.194 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:45.194 INFO: Seed: 3560604326 00:07:45.194 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:45.194 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:45.194 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:45.194 INFO: A corpus is not provided, starting from an empty corpus 00:07:45.194 #2 INITED exec/s: 0 rss: 65Mb 00:07:45.194 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:45.194 This may also happen if the target rejected all inputs we tried so far 00:07:45.194 [2024-06-11 13:35:37.962500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:121200ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.194 [2024-06-11 13:35:37.962527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.452 NEW_FUNC[1/686]: 0x486230 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:45.452 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:45.452 #19 NEW cov: 11850 ft: 11850 corp: 2/13b lim: 35 exec/s: 0 rss: 72Mb L: 12/12 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:45.452 [2024-06-11 13:35:38.152877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:121200ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.152909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.452 #20 NEW cov: 11980 ft: 12404 corp: 3/25b lim: 35 exec/s: 0 rss: 72Mb L: 12/12 MS: 1 ShuffleBytes- 00:07:45.452 [2024-06-11 13:35:38.203151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:121200ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.203174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.203257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.203270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.452 #21 NEW cov: 11986 ft: 12967 corp: 4/40b lim: 35 exec/s: 0 rss: 72Mb L: 15/15 MS: 1 CrossOver- 00:07:45.452 [2024-06-11 13:35:38.243390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.243414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.243486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.243498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.243554] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.243566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.452 #22 NEW cov: 12071 ft: 13422 corp: 5/66b lim: 35 exec/s: 0 rss: 72Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:45.452 [2024-06-11 13:35:38.283424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.283446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.283516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.283528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.452 #23 NEW cov: 12071 ft: 13538 corp: 6/81b lim: 35 exec/s: 0 rss: 72Mb L: 15/26 MS: 1 ShuffleBytes- 00:07:45.452 [2024-06-11 13:35:38.334041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.334064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.334120] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.334132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.334186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0012 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.334200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.334270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.334282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.452 [2024-06-11 13:35:38.334335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.452 [2024-06-11 13:35:38.334346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.712 #24 NEW cov: 12071 ft: 14121 corp: 7/116b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:45.712 [2024-06-11 13:35:38.383530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:eded0014 cdw11:ed00eded SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.383554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.712 #25 NEW cov: 12071 ft: 14191 corp: 8/128b lim: 35 exec/s: 0 rss: 72Mb L: 12/35 MS: 1 ChangeBinInt- 00:07:45.712 [2024-06-11 13:35:38.434157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.434181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.434255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.434267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.434319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.434330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.434385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.434396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.712 #26 NEW cov: 12071 ft: 14245 corp: 9/160b lim: 35 exec/s: 0 rss: 72Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:45.712 [2024-06-11 13:35:38.473914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.473937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.473990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.474002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.712 #27 NEW cov: 12071 ft: 14274 corp: 10/175b lim: 35 exec/s: 0 rss: 72Mb L: 15/35 MS: 1 ShuffleBytes- 00:07:45.712 [2024-06-11 13:35:38.514083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.514107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.514177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.514189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.712 #28 NEW cov: 12071 ft: 14343 corp: 11/192b lim: 35 exec/s: 0 rss: 72Mb L: 17/35 MS: 1 EraseBytes- 00:07:45.712 [2024-06-11 13:35:38.564214] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.564238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.564313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.564328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.712 #29 NEW cov: 12071 ft: 14411 corp: 12/207b lim: 35 exec/s: 0 rss: 72Mb L: 15/35 MS: 1 ChangeBit- 00:07:45.712 [2024-06-11 13:35:38.604459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.604482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.604538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.604549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.712 [2024-06-11 13:35:38.604603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fafa004a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.712 [2024-06-11 13:35:38.604614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.020 #30 NEW cov: 12071 ft: 14449 corp: 13/233b lim: 35 exec/s: 0 rss: 72Mb L: 26/35 MS: 1 ChangeByte- 00:07:46.020 [2024-06-11 13:35:38.644344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.644367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.020 #31 NEW cov: 12071 ft: 14475 corp: 14/242b lim: 35 exec/s: 0 rss: 72Mb L: 9/35 MS: 1 CrossOver- 00:07:46.020 [2024-06-11 13:35:38.684907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.684930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.020 [2024-06-11 13:35:38.684987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.684998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.020 [2024-06-11 13:35:38.685070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.685081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.020 
[2024-06-11 13:35:38.685136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffef00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.685147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.020 #32 NEW cov: 12071 ft: 14507 corp: 15/274b lim: 35 exec/s: 0 rss: 72Mb L: 32/35 MS: 1 ChangeBit- 00:07:46.020 [2024-06-11 13:35:38.734619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:121200ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.734643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.020 #33 NEW cov: 12071 ft: 14561 corp: 16/281b lim: 35 exec/s: 0 rss: 72Mb L: 7/35 MS: 1 EraseBytes- 00:07:46.020 [2024-06-11 13:35:38.774847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120013 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.774870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.020 [2024-06-11 13:35:38.774928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.020 [2024-06-11 13:35:38.774940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.020 #34 NEW cov: 12071 ft: 14595 corp: 17/296b lim: 35 exec/s: 0 rss: 72Mb L: 15/35 MS: 1 ChangeBit- 00:07:46.021 [2024-06-11 13:35:38.815286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.815309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.815364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.815375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.815429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.815441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.815495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ff1200ff cdw11:ff001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.815506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.021 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:46.021 #35 NEW cov: 12094 ft: 14611 corp: 18/328b lim: 35 exec/s: 0 rss: 72Mb L: 32/35 MS: 1 CrossOver- 00:07:46.021 [2024-06-11 13:35:38.865627] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.865652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.865708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12160012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.865720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.865776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0012 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.865788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.865843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.865854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.865910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.865921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.021 #36 NEW cov: 12094 ft: 14624 corp: 19/363b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:46.021 [2024-06-11 13:35:38.915675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.915699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.915759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.915771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.915825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fa5b00fa cdw11:5b005b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.915837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.021 [2024-06-11 13:35:38.915895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:fafa005b cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.021 [2024-06-11 13:35:38.915906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.303 #37 NEW cov: 12094 ft: 14631 corp: 20/395b lim: 35 exec/s: 0 rss: 73Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:07:46.303 [2024-06-11 13:35:38.955427] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:38.955450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:38.955506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:38.955517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.303 #38 NEW cov: 12094 ft: 14654 corp: 21/411b lim: 35 exec/s: 38 rss: 73Mb L: 16/35 MS: 1 InsertByte- 00:07:46.303 [2024-06-11 13:35:39.005588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec020012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.005610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.005680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.005693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.303 #39 NEW cov: 12094 ft: 14667 corp: 22/426b lim: 35 exec/s: 39 rss: 73Mb L: 15/35 MS: 1 ChangeBit- 00:07:46.303 [2024-06-11 13:35:39.045681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00f7fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.045703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.045773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.045785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.303 #40 NEW cov: 12094 ft: 14710 corp: 23/443b lim: 35 exec/s: 40 rss: 73Mb L: 17/35 MS: 1 ChangeBinInt- 00:07:46.303 [2024-06-11 13:35:39.095700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.095722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.303 #41 NEW cov: 12094 ft: 14727 corp: 24/452b lim: 35 exec/s: 41 rss: 73Mb L: 9/35 MS: 1 ShuffleBytes- 00:07:46.303 [2024-06-11 13:35:39.146180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:faba000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.146209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.146280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 
[2024-06-11 13:35:39.146292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.146344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fafa004a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.146355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.303 #42 NEW cov: 12094 ft: 14751 corp: 25/478b lim: 35 exec/s: 42 rss: 73Mb L: 26/35 MS: 1 ChangeBit- 00:07:46.303 [2024-06-11 13:35:39.196594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.196617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.196690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.196702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.196755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:1200ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.196766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.196818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff0012 cdw11:ef00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.196829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.303 [2024-06-11 13:35:39.196884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.303 [2024-06-11 13:35:39.196896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.562 #43 NEW cov: 12094 ft: 14764 corp: 26/513b lim: 35 exec/s: 43 rss: 73Mb L: 35/35 MS: 1 CrossOver- 00:07:46.562 [2024-06-11 13:35:39.236098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.236120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.562 #44 NEW cov: 12094 ft: 14811 corp: 27/524b lim: 35 exec/s: 44 rss: 73Mb L: 11/35 MS: 1 EraseBytes- 00:07:46.562 [2024-06-11 13:35:39.286537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.286558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.562 [2024-06-11 13:35:39.286614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 
cdw10:faa800fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.286625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.562 [2024-06-11 13:35:39.286680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.286693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.562 #45 NEW cov: 12094 ft: 14830 corp: 28/550b lim: 35 exec/s: 45 rss: 73Mb L: 26/35 MS: 1 ChangeByte- 00:07:46.562 [2024-06-11 13:35:39.326953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.326975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.562 [2024-06-11 13:35:39.327028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12160012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.327039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.562 [2024-06-11 13:35:39.327091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:16120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.327102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.562 [2024-06-11 13:35:39.327155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.562 [2024-06-11 13:35:39.327166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.562 [2024-06-11 13:35:39.327212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.327223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.563 #46 NEW cov: 12094 ft: 14844 corp: 29/585b lim: 35 exec/s: 46 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:46.563 [2024-06-11 13:35:39.376360] ctrlr.c:2710:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:46.563 [2024-06-11 13:35:39.377001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.377026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.377078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa0020 cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.377090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.563 
[2024-06-11 13:35:39.377144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fa5b00fa cdw11:5b005b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.377154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.377205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:fafa005b cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.377216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.563 #47 NEW cov: 12103 ft: 14889 corp: 30/617b lim: 35 exec/s: 47 rss: 74Mb L: 32/35 MS: 1 ChangeBinInt- 00:07:46.563 [2024-06-11 13:35:39.427260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.427282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.427338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.427351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.427406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:12120012 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.427416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.427471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ef00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.427482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.427536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.427546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.563 #53 NEW cov: 12103 ft: 14938 corp: 31/652b lim: 35 exec/s: 53 rss: 74Mb L: 35/35 MS: 1 CopyPart- 00:07:46.563 [2024-06-11 13:35:39.466871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:121200ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.466893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.563 [2024-06-11 13:35:39.466947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff0012 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.563 [2024-06-11 13:35:39.466958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.821 #54 NEW cov: 12103 ft: 
14958 corp: 32/672b lim: 35 exec/s: 54 rss: 74Mb L: 20/35 MS: 1 CMP- DE: "\377\377\377\377\377\377\377Z"- 00:07:46.821 [2024-06-11 13:35:39.507190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fa2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.821 [2024-06-11 13:35:39.507215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.821 [2024-06-11 13:35:39.507273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00a8fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.821 [2024-06-11 13:35:39.507285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.821 [2024-06-11 13:35:39.507357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.507368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.822 #55 NEW cov: 12103 ft: 15000 corp: 33/699b lim: 35 exec/s: 55 rss: 74Mb L: 27/35 MS: 1 InsertByte- 00:07:46.822 [2024-06-11 13:35:39.557194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00f7fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.557220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.822 [2024-06-11 13:35:39.557273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.557284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.822 #56 NEW cov: 12103 ft: 15012 corp: 34/716b lim: 35 exec/s: 56 rss: 74Mb L: 17/35 MS: 1 ChangeBinInt- 00:07:46.822 [2024-06-11 13:35:39.607330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.607351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.822 [2024-06-11 13:35:39.607406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:121200fa cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.607418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.822 [2024-06-11 13:35:39.657498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:f8fa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.657519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.822 [2024-06-11 13:35:39.657572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:121200fa cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.657583] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.822 #58 NEW cov: 12103 ft: 15017 corp: 35/734b lim: 35 exec/s: 58 rss: 74Mb L: 18/35 MS: 2 CrossOver-ChangeBit- 00:07:46.822 [2024-06-11 13:35:39.697443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:121900ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.822 [2024-06-11 13:35:39.697465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.822 #59 NEW cov: 12103 ft: 15021 corp: 36/746b lim: 35 exec/s: 59 rss: 74Mb L: 12/35 MS: 1 ChangeBinInt- 00:07:47.081 [2024-06-11 13:35:39.737759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:fafa000a cdw11:fa00fafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.737782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.081 [2024-06-11 13:35:39.737837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:fafa00fa cdw11:fa00f8fa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.737849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.081 #60 NEW cov: 12103 ft: 15061 corp: 37/763b lim: 35 exec/s: 60 rss: 74Mb L: 17/35 MS: 1 ChangeBit- 00:07:47.081 [2024-06-11 13:35:39.777689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.777713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.081 #61 NEW cov: 12103 ft: 15073 corp: 38/772b lim: 35 exec/s: 61 rss: 74Mb L: 9/35 MS: 1 EraseBytes- 00:07:47.081 [2024-06-11 13:35:39.817980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.818004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.081 [2024-06-11 13:35:39.818075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:1200127e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.818087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.081 #62 NEW cov: 12103 ft: 15088 corp: 39/788b lim: 35 exec/s: 62 rss: 74Mb L: 16/35 MS: 1 InsertByte- 00:07:47.081 [2024-06-11 13:35:39.858070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:12a200ec cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.858096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.081 [2024-06-11 13:35:39.858150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.858161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.081 #63 NEW cov: 12103 ft: 15115 corp: 40/804b lim: 35 exec/s: 63 rss: 74Mb L: 16/35 MS: 1 InsertByte- 00:07:47.081 [2024-06-11 13:35:39.898014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.898038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.081 #64 NEW cov: 12103 ft: 15122 corp: 41/815b lim: 35 exec/s: 64 rss: 74Mb L: 11/35 MS: 1 ChangeByte- 00:07:47.081 [2024-06-11 13:35:39.948344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ec120012 cdw11:12001212 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.948369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.081 [2024-06-11 13:35:39.948422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:12120012 cdw11:12001216 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:47.081 [2024-06-11 13:35:39.948435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.081 #65 NEW cov: 12103 ft: 15133 corp: 42/831b lim: 35 exec/s: 32 rss: 74Mb L: 16/35 MS: 1 ChangeBit- 00:07:47.081 #65 DONE cov: 12103 ft: 15133 corp: 42/831b lim: 35 exec/s: 32 rss: 74Mb 00:07:47.081 ###### Recommended dictionary. ###### 00:07:47.081 "\377\377\377\377\377\377\377Z" # Uses: 0 00:07:47.081 ###### End of recommended dictionary. ###### 00:07:47.081 Done 65 runs in 2 second(s) 00:07:47.341 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:07:47.341 13:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:47.341 13:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:47.341 13:35:40 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:07:47.341 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4403 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:47.342 13:35:40 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:07:47.342 [2024-06-11 13:35:40.159640] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:47.342 [2024-06-11 13:35:40.159705] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438115 ] 00:07:47.342 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.601 [2024-06-11 13:35:40.369311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.601 [2024-06-11 13:35:40.453255] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.860 [2024-06-11 13:35:40.517149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.860 [2024-06-11 13:35:40.533507] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:07:47.860 INFO: Running with entropic power schedule (0xFF, 100). 00:07:47.860 INFO: Seed: 1881621695 00:07:47.860 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:47.860 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:47.860 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:07:47.860 INFO: A corpus is not provided, starting from an empty corpus 00:07:47.860 #2 INITED exec/s: 0 rss: 64Mb 00:07:47.860 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:47.860 This may also happen if the target rejected all inputs we tried so far 00:07:48.119 NEW_FUNC[1/675]: 0x487f00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:07:48.119 NEW_FUNC[2/675]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:48.119 #7 NEW cov: 11772 ft: 11736 corp: 2/17b lim: 20 exec/s: 0 rss: 71Mb L: 16/16 MS: 5 ShuffleBytes-ChangeBit-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:48.119 #8 NEW cov: 11903 ft: 12170 corp: 3/36b lim: 20 exec/s: 0 rss: 71Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:07:48.119 #9 NEW cov: 11909 ft: 12474 corp: 4/52b lim: 20 exec/s: 0 rss: 72Mb L: 16/19 MS: 1 ShuffleBytes- 00:07:48.119 #10 NEW cov: 11994 ft: 12828 corp: 5/72b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:48.377 #11 NEW cov: 11994 ft: 12937 corp: 6/91b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 ChangeBit- 00:07:48.377 #12 NEW cov: 11994 ft: 12993 corp: 7/110b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 InsertRepeatedBytes- 00:07:48.377 #13 NEW cov: 11994 ft: 13118 corp: 8/129b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 CrossOver- 00:07:48.377 #14 NEW cov: 11994 ft: 13234 corp: 9/149b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:07:48.377 #15 NEW cov: 11994 ft: 13284 corp: 10/168b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 CrossOver- 00:07:48.637 #16 NEW cov: 11994 ft: 13323 corp: 11/187b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 ChangeBit- 00:07:48.637 #17 NEW cov: 11994 ft: 13348 corp: 12/206b lim: 20 exec/s: 0 rss: 72Mb L: 19/20 MS: 1 ChangeBinInt- 00:07:48.637 #18 NEW cov: 11994 ft: 13402 corp: 13/222b lim: 20 exec/s: 0 rss: 72Mb L: 16/20 MS: 1 ChangeByte- 00:07:48.637 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:48.637 #19 NEW cov: 12017 ft: 13455 corp: 14/242b lim: 20 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertByte- 00:07:48.895 #20 NEW cov: 12022 ft: 13810 corp: 15/252b lim: 20 exec/s: 20 rss: 72Mb L: 10/20 MS: 1 CrossOver- 00:07:48.895 #21 NEW cov: 12022 ft: 13869 corp: 16/268b lim: 20 exec/s: 21 rss: 72Mb L: 16/20 MS: 1 ChangeBit- 00:07:48.895 #22 NEW cov: 12022 ft: 13886 corp: 17/287b lim: 20 exec/s: 22 rss: 72Mb L: 19/20 MS: 1 CopyPart- 00:07:48.895 #23 NEW cov: 12022 ft: 13939 corp: 18/306b lim: 20 exec/s: 23 rss: 72Mb L: 19/20 MS: 1 ChangeByte- 00:07:48.895 #24 NEW cov: 12022 ft: 14001 corp: 19/326b lim: 20 exec/s: 24 rss: 72Mb L: 20/20 MS: 1 ChangeBit- 00:07:49.154 #25 NEW cov: 12022 ft: 14038 corp: 20/345b lim: 20 exec/s: 25 rss: 72Mb L: 19/20 MS: 1 CopyPart- 00:07:49.154 #26 NEW cov: 12022 ft: 14047 corp: 21/365b lim: 20 exec/s: 26 rss: 72Mb L: 20/20 MS: 1 CrossOver- 00:07:49.154 #27 NEW cov: 12022 ft: 14069 corp: 22/376b lim: 20 exec/s: 27 rss: 72Mb L: 11/20 MS: 1 EraseBytes- 00:07:49.154 #28 NEW cov: 12022 ft: 14078 corp: 23/396b lim: 20 exec/s: 28 rss: 72Mb L: 20/20 MS: 1 InsertByte- 00:07:49.420 #29 NEW cov: 12022 ft: 14100 corp: 24/404b lim: 20 exec/s: 29 rss: 72Mb L: 8/20 MS: 1 EraseBytes- 00:07:49.420 #30 NEW cov: 12022 ft: 14110 corp: 25/424b lim: 20 exec/s: 30 rss: 72Mb L: 20/20 MS: 1 CopyPart- 00:07:49.420 #31 NEW cov: 12022 ft: 14130 corp: 26/443b lim: 20 exec/s: 31 rss: 73Mb L: 19/20 MS: 1 ChangeBinInt- 00:07:49.420 #32 NEW cov: 12022 ft: 14154 corp: 27/463b lim: 20 exec/s: 32 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:07:49.683 #33 NEW cov: 12022 ft: 14178 corp: 28/483b lim: 20 
exec/s: 33 rss: 73Mb L: 20/20 MS: 1 ChangeBinInt- 00:07:49.683 #34 NEW cov: 12022 ft: 14186 corp: 29/499b lim: 20 exec/s: 34 rss: 73Mb L: 16/20 MS: 1 ShuffleBytes- 00:07:49.683 #35 NEW cov: 12022 ft: 14196 corp: 30/519b lim: 20 exec/s: 35 rss: 73Mb L: 20/20 MS: 1 InsertByte- 00:07:49.683 #36 NEW cov: 12022 ft: 14198 corp: 31/539b lim: 20 exec/s: 36 rss: 73Mb L: 20/20 MS: 1 CopyPart- 00:07:49.683 #37 NEW cov: 12022 ft: 14207 corp: 32/556b lim: 20 exec/s: 37 rss: 73Mb L: 17/20 MS: 1 EraseBytes- 00:07:49.943 #38 NEW cov: 12022 ft: 14214 corp: 33/576b lim: 20 exec/s: 19 rss: 73Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:07:49.943 #38 DONE cov: 12022 ft: 14214 corp: 33/576b lim: 20 exec/s: 19 rss: 73Mb 00:07:49.943 Done 38 runs in 2 second(s) 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4404 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:49.943 13:35:42 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:07:49.943 [2024-06-11 13:35:42.802318] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:49.943 [2024-06-11 13:35:42.802402] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438506 ] 00:07:49.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.201 [2024-06-11 13:35:43.112725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.460 [2024-06-11 13:35:43.220223] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.460 [2024-06-11 13:35:43.284153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.460 [2024-06-11 13:35:43.300522] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:07:50.460 INFO: Running with entropic power schedule (0xFF, 100). 00:07:50.460 INFO: Seed: 353664391 00:07:50.460 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:50.460 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:50.460 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:07:50.460 INFO: A corpus is not provided, starting from an empty corpus 00:07:50.460 #2 INITED exec/s: 0 rss: 64Mb 00:07:50.460 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:50.460 This may also happen if the target rejected all inputs we tried so far 00:07:50.460 [2024-06-11 13:35:43.356959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a3ede00 cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.460 [2024-06-11 13:35:43.356995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.460 [2024-06-11 13:35:43.357069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.460 [2024-06-11 13:35:43.357087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.460 [2024-06-11 13:35:43.357157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.460 [2024-06-11 13:35:43.357173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.460 [2024-06-11 13:35:43.357240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.460 [2024-06-11 13:35:43.357257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.719 NEW_FUNC[1/687]: 0x488ff0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:50.719 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:50.719 #11 NEW cov: 11868 ft: 11869 corp: 2/30b lim: 35 exec/s: 0 rss: 71Mb L: 29/29 MS: 4 ShuffleBytes-CMP-ChangeByte-InsertRepeatedBytes- DE: "\001\000"- 00:07:50.719 [2024-06-11 13:35:43.516766] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.719 [2024-06-11 13:35:43.516810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.719 [2024-06-11 13:35:43.516877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.719 [2024-06-11 13:35:43.516895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.719 #13 NEW cov: 12001 ft: 12837 corp: 3/48b lim: 35 exec/s: 0 rss: 71Mb L: 18/29 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:50.719 [2024-06-11 13:35:43.576624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007f0a01 cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.719 [2024-06-11 13:35:43.576659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.719 #14 NEW cov: 12007 ft: 13691 corp: 4/57b lim: 35 exec/s: 0 rss: 71Mb L: 9/29 MS: 1 CMP- DE: "\001\000\177\261\230\000\034,"- 00:07:50.977 [2024-06-11 13:35:43.637010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.637044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.977 [2024-06-11 13:35:43.637110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.637127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.977 #15 NEW cov: 12092 ft: 13927 corp: 5/76b lim: 35 exec/s: 0 rss: 71Mb L: 19/29 MS: 1 InsertByte- 00:07:50.977 [2024-06-11 13:35:43.717263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36a70000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.717296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.977 [2024-06-11 13:35:43.717361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.717378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.977 #16 NEW cov: 12092 ft: 14016 corp: 6/94b lim: 35 exec/s: 0 rss: 71Mb L: 18/29 MS: 1 ChangeByte- 00:07:50.977 [2024-06-11 13:35:43.767427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007f0a01 cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.767460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.977 [2024-06-11 13:35:43.767527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:007f1c01 
cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.767545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.977 #17 NEW cov: 12092 ft: 14084 corp: 7/111b lim: 35 exec/s: 0 rss: 72Mb L: 17/29 MS: 1 CopyPart- 00:07:50.977 [2024-06-11 13:35:43.847611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.847645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.977 [2024-06-11 13:35:43.847710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36361636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:50.977 [2024-06-11 13:35:43.847732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.235 #18 NEW cov: 12092 ft: 14165 corp: 8/130b lim: 35 exec/s: 0 rss: 72Mb L: 19/29 MS: 1 ChangeBit- 00:07:51.235 [2024-06-11 13:35:43.928073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.235 [2024-06-11 13:35:43.928107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 [2024-06-11 13:35:43.928177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.235 [2024-06-11 13:35:43.928194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.235 [2024-06-11 13:35:43.928268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0100001c cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.235 [2024-06-11 13:35:43.928285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.235 #19 NEW cov: 12092 ft: 14394 corp: 9/155b lim: 35 exec/s: 0 rss: 72Mb L: 25/29 MS: 1 InsertRepeatedBytes- 00:07:51.235 [2024-06-11 13:35:44.007878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2d7f0a01 cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.235 [2024-06-11 13:35:44.007912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 #20 NEW cov: 12092 ft: 14427 corp: 10/164b lim: 35 exec/s: 0 rss: 72Mb L: 9/29 MS: 1 ChangeByte- 00:07:51.235 [2024-06-11 13:35:44.068242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.235 [2024-06-11 13:35:44.068277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.235 [2024-06-11 13:35:44.068348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.235 [2024-06-11 13:35:44.068366] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.236 #23 NEW cov: 12092 ft: 14509 corp: 11/182b lim: 35 exec/s: 0 rss: 72Mb L: 18/29 MS: 3 ShuffleBytes-ShuffleBytes-CrossOver- 00:07:51.236 [2024-06-11 13:35:44.118649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.236 [2024-06-11 13:35:44.118683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.236 [2024-06-11 13:35:44.118750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.236 [2024-06-11 13:35:44.118768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.236 [2024-06-11 13:35:44.118838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:01003636 cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.236 [2024-06-11 13:35:44.118855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.494 #24 NEW cov: 12092 ft: 14542 corp: 12/208b lim: 35 exec/s: 0 rss: 72Mb L: 26/29 MS: 1 PersAutoDict- DE: "\001\000\177\261\230\000\034,"- 00:07:51.494 [2024-06-11 13:35:44.168319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.494 [2024-06-11 13:35:44.168352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.494 #25 NEW cov: 12092 ft: 14571 corp: 13/218b lim: 35 exec/s: 0 rss: 72Mb L: 10/29 MS: 1 CrossOver- 00:07:51.494 [2024-06-11 13:35:44.219115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.494 [2024-06-11 13:35:44.219148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.494 [2024-06-11 13:35:44.219224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.494 [2024-06-11 13:35:44.219241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.494 [2024-06-11 13:35:44.219309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.494 [2024-06-11 13:35:44.219326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.494 [2024-06-11 13:35:44.219400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.494 [2024-06-11 13:35:44.219416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.494 NEW_FUNC[1/1]: 0x1a71960 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:51.494 #26 NEW cov: 12115 ft: 14604 corp: 14/250b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:51.494 [2024-06-11 13:35:44.299300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a3ede00 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.494 [2024-06-11 13:35:44.299333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.494 [2024-06-11 13:35:44.299399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.495 [2024-06-11 13:35:44.299416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.495 [2024-06-11 13:35:44.299483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.495 [2024-06-11 13:35:44.299499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.495 [2024-06-11 13:35:44.299560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.495 [2024-06-11 13:35:44.299576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.495 #27 NEW cov: 12115 ft: 14706 corp: 15/281b lim: 35 exec/s: 27 rss: 72Mb L: 31/32 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:51.495 [2024-06-11 13:35:44.379305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.495 [2024-06-11 13:35:44.379339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.495 [2024-06-11 13:35:44.379407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.495 [2024-06-11 13:35:44.379424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.495 [2024-06-11 13:35:44.379490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0100001c cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.495 [2024-06-11 13:35:44.379518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.753 #28 NEW cov: 12115 ft: 14720 corp: 16/306b lim: 35 exec/s: 28 rss: 72Mb L: 25/32 MS: 1 ShuffleBytes- 00:07:51.753 [2024-06-11 13:35:44.459539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.753 [2024-06-11 13:35:44.459571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.753 [2024-06-11 13:35:44.459640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36361636 
cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.753 [2024-06-11 13:35:44.459657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.753 [2024-06-11 13:35:44.459725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ea363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.753 [2024-06-11 13:35:44.459745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.753 #29 NEW cov: 12115 ft: 14737 corp: 17/331b lim: 35 exec/s: 29 rss: 72Mb L: 25/32 MS: 1 CopyPart- 00:07:51.753 [2024-06-11 13:35:44.509309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00240a01 cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.753 [2024-06-11 13:35:44.509342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.753 #30 NEW cov: 12115 ft: 14750 corp: 18/340b lim: 35 exec/s: 30 rss: 72Mb L: 9/32 MS: 1 ChangeByte- 00:07:51.753 [2024-06-11 13:35:44.560245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a3ede00 cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.560278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.754 [2024-06-11 13:35:44.560348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.560365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:51.754 [2024-06-11 13:35:44.560432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.560449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:51.754 [2024-06-11 13:35:44.560515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.560532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:51.754 [2024-06-11 13:35:44.560597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.560615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:51.754 #31 NEW cov: 12115 ft: 14862 corp: 19/375b lim: 35 exec/s: 31 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:07:51.754 [2024-06-11 13:35:44.639893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36a70000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.639925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:51.754 [2024-06-11 13:35:44.639994] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:51.754 [2024-06-11 13:35:44.640011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.012 #32 NEW cov: 12115 ft: 14867 corp: 20/393b lim: 35 exec/s: 32 rss: 72Mb L: 18/35 MS: 1 ChangeBinInt- 00:07:52.012 [2024-06-11 13:35:44.710062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.710094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.012 [2024-06-11 13:35:44.710164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36361636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.710181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.012 #33 NEW cov: 12115 ft: 14883 corp: 21/410b lim: 35 exec/s: 33 rss: 72Mb L: 17/35 MS: 1 EraseBytes- 00:07:52.012 [2024-06-11 13:35:44.780519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.780552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.012 [2024-06-11 13:35:44.780620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.780637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.012 [2024-06-11 13:35:44.780708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:01003636 cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.780724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.012 #34 NEW cov: 12115 ft: 14886 corp: 22/436b lim: 35 exec/s: 34 rss: 72Mb L: 26/35 MS: 1 ShuffleBytes- 00:07:52.012 [2024-06-11 13:35:44.860720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.860752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.012 [2024-06-11 13:35:44.860822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.860840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.012 [2024-06-11 13:35:44.860906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00360136 cdw11:7fb10001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.860922] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.012 #35 NEW cov: 12115 ft: 14934 corp: 23/462b lim: 35 exec/s: 35 rss: 72Mb L: 26/35 MS: 1 ShuffleBytes- 00:07:52.012 [2024-06-11 13:35:44.910666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36360a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.910699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.012 [2024-06-11 13:35:44.910765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36361636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.012 [2024-06-11 13:35:44.910782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.270 #36 NEW cov: 12115 ft: 14966 corp: 24/479b lim: 35 exec/s: 36 rss: 73Mb L: 17/35 MS: 1 ChangeBit- 00:07:52.270 [2024-06-11 13:35:44.980692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00240a01 cdw11:b1200000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:44.980727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.270 #37 NEW cov: 12115 ft: 15007 corp: 25/488b lim: 35 exec/s: 37 rss: 73Mb L: 9/35 MS: 1 CMP- DE: " \000\000\000"- 00:07:52.270 [2024-06-11 13:35:45.061538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:34340a0a cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.061571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.270 [2024-06-11 13:35:45.061638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.061656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.270 [2024-06-11 13:35:45.061724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.061741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.270 [2024-06-11 13:35:45.061806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:34343434 cdw11:34340000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.061823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.270 #39 NEW cov: 12115 ft: 15054 corp: 26/522b lim: 35 exec/s: 39 rss: 73Mb L: 34/35 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:52.270 [2024-06-11 13:35:45.111298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:007f0a01 cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.111330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.270 [2024-06-11 
13:35:45.111395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:007f1c01 cdw11:b1980000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.111412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.270 #40 NEW cov: 12115 ft: 15067 corp: 27/539b lim: 35 exec/s: 40 rss: 73Mb L: 17/35 MS: 1 CopyPart- 00:07:52.270 [2024-06-11 13:35:45.161436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:3e3e3e3e cdw11:3e3e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.161469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.270 [2024-06-11 13:35:45.161536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:3e203e3e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.270 [2024-06-11 13:35:45.161554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.529 #41 NEW cov: 12115 ft: 15088 corp: 28/557b lim: 35 exec/s: 41 rss: 73Mb L: 18/35 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:07:52.529 [2024-06-11 13:35:45.241860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:36040a36 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.241895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.529 [2024-06-11 13:35:45.241960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:36363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.241977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.529 [2024-06-11 13:35:45.242043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:36003601 cdw11:367f0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.242060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.529 #42 NEW cov: 12115 ft: 15092 corp: 29/584b lim: 35 exec/s: 42 rss: 73Mb L: 27/35 MS: 1 InsertByte- 00:07:52.529 [2024-06-11 13:35:45.322521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.322556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:52.529 [2024-06-11 13:35:45.322621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.322643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:52.529 [2024-06-11 13:35:45.322711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffff3636 cdw11:ff360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.322729] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:52.529 [2024-06-11 13:35:45.322798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:16363636 cdw11:36360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.322819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:52.529 [2024-06-11 13:35:45.322892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:36363636 cdw11:ea360000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:52.529 [2024-06-11 13:35:45.322911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:52.530 #43 NEW cov: 12115 ft: 15114 corp: 30/619b lim: 35 exec/s: 21 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:07:52.530 #43 DONE cov: 12115 ft: 15114 corp: 30/619b lim: 35 exec/s: 21 rss: 73Mb 00:07:52.530 ###### Recommended dictionary. ###### 00:07:52.530 "\001\000" # Uses: 1 00:07:52.530 "\001\000\177\261\230\000\034," # Uses: 1 00:07:52.530 " \000\000\000" # Uses: 1 00:07:52.530 ###### End of recommended dictionary. ###### 00:07:52.530 Done 43 runs in 2 second(s) 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4405 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:52.789 13:35:45 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:07:52.789 [2024-06-11 13:35:45.566007] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:52.789 [2024-06-11 13:35:45.566087] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438977 ] 00:07:52.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.048 [2024-06-11 13:35:45.876673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.307 [2024-06-11 13:35:45.988990] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.307 [2024-06-11 13:35:46.052967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.307 [2024-06-11 13:35:46.069341] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:53.307 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.307 INFO: Seed: 3122663177 00:07:53.307 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:53.307 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:53.307 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:53.307 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.307 #2 INITED exec/s: 0 rss: 64Mb 00:07:53.307 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:53.307 This may also happen if the target rejected all inputs we tried so far 00:07:53.307 [2024-06-11 13:35:46.125948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.307 [2024-06-11 13:35:46.125986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.307 [2024-06-11 13:35:46.126055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.307 [2024-06-11 13:35:46.126073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.307 [2024-06-11 13:35:46.126137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.307 [2024-06-11 13:35:46.126154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.307 [2024-06-11 13:35:46.126218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.307 [2024-06-11 13:35:46.126235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.307 [2024-06-11 13:35:46.126302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.307 [2024-06-11 13:35:46.126319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.567 NEW_FUNC[1/687]: 0x48b180 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:53.567 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:53.567 #7 NEW cov: 11882 ft: 11879 corp: 2/46b lim: 45 exec/s: 0 rss: 71Mb L: 45/45 MS: 5 ShuffleBytes-CopyPart-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:07:53.567 [2024-06-11 13:35:46.346356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.346399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.346464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.346482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.346550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.346567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:53.567 [2024-06-11 13:35:46.346629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.346646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.346708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.346724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.567 #8 NEW cov: 12012 ft: 12341 corp: 3/91b lim: 45 exec/s: 0 rss: 71Mb L: 45/45 MS: 1 CopyPart- 00:07:53.567 [2024-06-11 13:35:46.426559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.426596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.426661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.426678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.426744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.426760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.426822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.426838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.426899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.426916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.567 #9 NEW cov: 12018 ft: 12703 corp: 4/136b lim: 45 exec/s: 0 rss: 71Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:53.567 [2024-06-11 13:35:46.476711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.476745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.476810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.476828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:53.567 [2024-06-11 13:35:46.476893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.476910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.476972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.476994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.567 [2024-06-11 13:35:46.477057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.567 [2024-06-11 13:35:46.477073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.826 #10 NEW cov: 12103 ft: 12943 corp: 5/181b lim: 45 exec/s: 0 rss: 71Mb L: 45/45 MS: 1 CopyPart- 00:07:53.826 [2024-06-11 13:35:46.556900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.556933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.557000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.557017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.557083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.557100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.557162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.557178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.557242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.557258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:53.826 #11 NEW cov: 12103 ft: 13016 corp: 6/226b lim: 45 exec/s: 0 rss: 71Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:53.826 [2024-06-11 13:35:46.636724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c9c90ac9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.636759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:53.826 [2024-06-11 13:35:46.636826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c9c9c9c9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.636843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.636908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c9c9c9c9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.636925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.826 #13 NEW cov: 12103 ft: 13631 corp: 7/261b lim: 45 exec/s: 0 rss: 71Mb L: 35/45 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:53.826 [2024-06-11 13:35:46.697145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3e5d0efb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.697179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.697218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.697238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.697302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.697318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:53.826 [2024-06-11 13:35:46.697380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.826 [2024-06-11 13:35:46.697395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:53.826 #17 NEW cov: 12103 ft: 13741 corp: 8/302b lim: 45 exec/s: 0 rss: 72Mb L: 41/45 MS: 4 ChangeByte-CopyPart-ChangeBinInt-CrossOver- 00:07:54.085 [2024-06-11 13:35:46.746845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.746879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.746956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.746979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.085 #18 NEW cov: 12103 ft: 13967 corp: 9/328b lim: 45 exec/s: 0 rss: 72Mb L: 26/45 MS: 1 EraseBytes- 00:07:54.085 [2024-06-11 13:35:46.827505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c9c90ac9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:54.085 [2024-06-11 13:35:46.827539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.827616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c9c9c9c9 cdw11:c9c90000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.827647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.827725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:c9c900c9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.827748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.827823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c9c9c9c9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.827846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.085 #19 NEW cov: 12103 ft: 13980 corp: 10/367b lim: 45 exec/s: 0 rss: 72Mb L: 39/45 MS: 1 CMP- DE: "\001\002\000\000"- 00:07:54.085 [2024-06-11 13:35:46.907922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.907955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.908033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.908058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.908137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.908166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.908247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.908271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.908350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.908373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.085 #20 NEW cov: 12103 ft: 14006 corp: 11/412b lim: 45 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:07:54.085 [2024-06-11 13:35:46.958072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.958105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.958183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.958212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.958291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.958315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.958393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000ff2d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.958416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.085 [2024-06-11 13:35:46.958491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.085 [2024-06-11 13:35:46.958514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.344 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:54.344 #21 NEW cov: 12126 ft: 14064 corp: 12/457b lim: 45 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:54.345 [2024-06-11 13:35:47.038291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3e5d0efb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.038324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.038403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.038426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.038505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff01ffff cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.038529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.038608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.038636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.038717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.038741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.345 #22 NEW cov: 12126 ft: 14072 corp: 13/502b lim: 45 exec/s: 0 rss: 72Mb L: 45/45 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:54.345 [2024-06-11 13:35:47.118286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3e5d0efb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.118319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.118394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffbfffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.118418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.118495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.118518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.118593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.118616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.345 #23 NEW cov: 12126 ft: 14090 corp: 14/543b lim: 45 exec/s: 23 rss: 72Mb L: 41/45 MS: 1 ChangeBit- 00:07:54.345 [2024-06-11 13:35:47.168628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.168660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.168738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.168762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.168839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.168862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.168940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.168963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.169038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 
cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.169062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.345 #24 NEW cov: 12126 ft: 14117 corp: 15/588b lim: 45 exec/s: 24 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:07:54.345 [2024-06-11 13:35:47.228708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3eff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.228745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.228824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.228848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.228924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.228948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.229027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.229050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.345 [2024-06-11 13:35:47.229128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.345 [2024-06-11 13:35:47.229151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.604 #25 NEW cov: 12126 ft: 14141 corp: 16/633b lim: 45 exec/s: 25 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:07:54.604 [2024-06-11 13:35:47.278912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.278944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.279019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.279043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.279121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.279144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.279229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 
cdw10:0000ff2d cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.279254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.279332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.279355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.604 #26 NEW cov: 12126 ft: 14182 corp: 17/678b lim: 45 exec/s: 26 rss: 72Mb L: 45/45 MS: 1 ChangeBit- 00:07:54.604 [2024-06-11 13:35:47.359204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:365d0efb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.359237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.359315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.359345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.359422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff01ffff cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.359445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.359522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.359545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.359623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.359645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.604 #27 NEW cov: 12126 ft: 14204 corp: 18/723b lim: 45 exec/s: 27 rss: 72Mb L: 45/45 MS: 1 ChangeBit- 00:07:54.604 [2024-06-11 13:35:47.429136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:c9c90ac9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.429168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.429253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c9c9c9c9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.429278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.429357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 
cdw10:c9c9c9c9 cdw11:c95b0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.429381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.429460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:c9c9c9c9 cdw11:c9c90006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.429483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.604 #28 NEW cov: 12126 ft: 14254 corp: 19/759b lim: 45 exec/s: 28 rss: 72Mb L: 36/45 MS: 1 InsertByte- 00:07:54.604 [2024-06-11 13:35:47.479520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.479553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.479630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.479654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.479730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.479753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.479830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.479854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.604 [2024-06-11 13:35:47.479937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.604 [2024-06-11 13:35:47.479961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.604 #29 NEW cov: 12126 ft: 14284 corp: 20/804b lim: 45 exec/s: 29 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:07:54.863 [2024-06-11 13:35:47.529734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.863 [2024-06-11 13:35:47.529766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.863 [2024-06-11 13:35:47.529846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.863 [2024-06-11 13:35:47.529870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.863 [2024-06-11 13:35:47.529947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 
cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.863 [2024-06-11 13:35:47.529971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.863 [2024-06-11 13:35:47.530048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.530071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.530148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.530171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.864 #30 NEW cov: 12126 ft: 14319 corp: 21/849b lim: 45 exec/s: 30 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:07:54.864 [2024-06-11 13:35:47.609933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0000c5a2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.609966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.610044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.610068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.610144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.610167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.610261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.610287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.610364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.610388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.864 #31 NEW cov: 12126 ft: 14360 corp: 22/894b lim: 45 exec/s: 31 rss: 72Mb L: 45/45 MS: 1 ChangeBinInt- 00:07:54.864 [2024-06-11 13:35:47.659859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:fb3e0a0e cdw11:5dff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.659892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.659970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.659993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.660069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.660092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.660168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.660192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.864 #32 NEW cov: 12126 ft: 14375 corp: 23/934b lim: 45 exec/s: 32 rss: 72Mb L: 40/45 MS: 1 CrossOver- 00:07:54.864 [2024-06-11 13:35:47.710237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.710269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.710348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.710373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.710451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.710474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.710552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.710575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.710655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.710679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:54.864 #33 NEW cov: 12126 ft: 14398 corp: 24/979b lim: 45 exec/s: 33 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:07:54.864 [2024-06-11 13:35:47.760377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.760409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.760485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 
cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.760509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.760585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.760613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.760690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.760717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:54.864 [2024-06-11 13:35:47.760792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.864 [2024-06-11 13:35:47.760815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.123 #34 NEW cov: 12126 ft: 14405 corp: 25/1024b lim: 45 exec/s: 34 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:07:55.123 [2024-06-11 13:35:47.830580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.830615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.830695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffbfff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.830719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.830794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.830817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.830893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.830916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.830992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.831016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.123 #35 NEW cov: 12126 ft: 14410 corp: 26/1069b lim: 45 exec/s: 35 rss: 72Mb L: 45/45 MS: 1 ChangeBit- 00:07:55.123 [2024-06-11 13:35:47.910785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 
cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.910817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.910893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.910917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.910993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.911016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.911092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.911121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.911204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.911228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.123 #36 NEW cov: 12126 ft: 14413 corp: 27/1114b lim: 45 exec/s: 36 rss: 72Mb L: 45/45 MS: 1 CrossOver- 00:07:55.123 [2024-06-11 13:35:47.981063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3e5d0efb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.981097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.981175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.981206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.981286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.981309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.981386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.981409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:47.981486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:47.981509] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.123 #37 NEW cov: 12126 ft: 14476 corp: 28/1159b lim: 45 exec/s: 37 rss: 72Mb L: 45/45 MS: 1 CopyPart- 00:07:55.123 [2024-06-11 13:35:48.031161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff3e5d cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:48.031195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:48.031282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:0102ffff cdw11:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:48.031305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:48.031384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.123 [2024-06-11 13:35:48.031407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.123 [2024-06-11 13:35:48.031485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.124 [2024-06-11 13:35:48.031509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:55.124 [2024-06-11 13:35:48.031590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.124 [2024-06-11 13:35:48.031613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:55.383 #38 NEW cov: 12126 ft: 14479 corp: 29/1204b lim: 45 exec/s: 38 rss: 72Mb L: 45/45 MS: 1 PersAutoDict- DE: "\001\002\000\000"- 00:07:55.383 [2024-06-11 13:35:48.080867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:3e5d0efb cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.383 [2024-06-11 13:35:48.080900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:55.383 [2024-06-11 13:35:48.080978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.383 [2024-06-11 13:35:48.081002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:55.383 [2024-06-11 13:35:48.081079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.383 [2024-06-11 13:35:48.081102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:55.383 #39 NEW cov: 12126 ft: 14514 corp: 30/1239b lim: 45 exec/s: 19 rss: 72Mb L: 35/45 MS: 1 EraseBytes- 00:07:55.383 #39 DONE cov: 12126 ft: 14514 corp: 30/1239b lim: 45 exec/s: 19 rss: 72Mb 00:07:55.383 ###### 
Recommended dictionary. ###### 00:07:55.383 "\001\002\000\000" # Uses: 2 00:07:55.383 ###### End of recommended dictionary. ###### 00:07:55.383 Done 39 runs in 2 second(s) 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:55.383 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4406 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:55.642 13:35:48 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:07:55.642 [2024-06-11 13:35:48.335119] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:07:55.642 [2024-06-11 13:35:48.335217] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439422 ] 00:07:55.642 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.900 [2024-06-11 13:35:48.654444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.900 [2024-06-11 13:35:48.763090] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.159 [2024-06-11 13:35:48.827227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.159 [2024-06-11 13:35:48.843602] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:56.159 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.159 INFO: Seed: 1600693486 00:07:56.159 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:56.159 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:56.159 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:56.159 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.159 #2 INITED exec/s: 0 rss: 65Mb 00:07:56.159 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:56.159 This may also happen if the target rejected all inputs we tried so far 00:07:56.159 [2024-06-11 13:35:48.892910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a0a cdw11:00000000 00:07:56.159 [2024-06-11 13:35:48.892936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.159 NEW_FUNC[1/685]: 0x48d990 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:56.159 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.159 #5 NEW cov: 11799 ft: 11799 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 3 ChangeBinInt-ChangeByte-CrossOver- 00:07:56.418 [2024-06-11 13:35:49.083344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000320a cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.083377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.418 #6 NEW cov: 11929 ft: 12363 corp: 3/5b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeBit- 00:07:56.418 [2024-06-11 13:35:49.133982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.134006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.134056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.134068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.134119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 
cdw10:000090c0 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.134130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.134181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.134192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.134248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000320a cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.134259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.418 #7 NEW cov: 11935 ft: 12972 corp: 4/15b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CMP- DE: "\000\003\345\311\220\300\344|"- 00:07:56.418 [2024-06-11 13:35:49.184091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000320a cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.184116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.184168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.184179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.184230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.184242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.184290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000090c0 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.184300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.184349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.184360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.418 #8 NEW cov: 12020 ft: 13273 corp: 5/25b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 CrossOver- 00:07:56.418 [2024-06-11 13:35:49.224255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.224277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.224325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.224336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.224384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ 
(04) qid:0 cid:6 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.224394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.224442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000090c0 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.224452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.224501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.224512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.418 #9 NEW cov: 12020 ft: 13359 corp: 6/35b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:07:56.418 [2024-06-11 13:35:49.274106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.274128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.274180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.274191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.418 [2024-06-11 13:35:49.274243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e5e4 cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.274255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.418 #10 NEW cov: 12020 ft: 13547 corp: 7/42b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 EraseBytes- 00:07:56.418 [2024-06-11 13:35:49.323913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003a0a cdw11:00000000 00:07:56.418 [2024-06-11 13:35:49.323937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.677 #11 NEW cov: 12020 ft: 13638 corp: 8/45b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 1 CopyPart- 00:07:56.677 [2024-06-11 13:35:49.364066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000094e4 cdw11:00000000 00:07:56.677 [2024-06-11 13:35:49.364090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.677 #13 NEW cov: 12020 ft: 13688 corp: 9/47b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 2 CrossOver-InsertByte- 00:07:56.678 [2024-06-11 13:35:49.404493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002821 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.404516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.404568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.404579] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.404631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e5e4 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.404642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.678 #14 NEW cov: 12020 ft: 13809 corp: 10/54b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ChangeByte- 00:07:56.678 [2024-06-11 13:35:49.454748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.454771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.454821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000000c9 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.454832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.454881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000090c0 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.454892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.454939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.454949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.678 #15 NEW cov: 12020 ft: 13860 corp: 11/62b lim: 10 exec/s: 0 rss: 72Mb L: 8/10 MS: 1 EraseBytes- 00:07:56.678 [2024-06-11 13:35:49.494432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000400 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.494456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.678 #18 NEW cov: 12020 ft: 13948 corp: 12/65b lim: 10 exec/s: 0 rss: 72Mb L: 3/10 MS: 3 CopyPart-ShuffleBytes-CMP- DE: "\004\000"- 00:07:56.678 [2024-06-11 13:35:49.535116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000090 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.535139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.535190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.535209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.535261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c0e5 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.535271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.535322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000332 cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.535333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.678 [2024-06-11 13:35:49.535383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000c90a cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.535394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.678 #19 NEW cov: 12020 ft: 13955 corp: 13/75b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:56.678 [2024-06-11 13:35:49.584729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003d3d cdw11:00000000 00:07:56.678 [2024-06-11 13:35:49.584752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 #22 NEW cov: 12020 ft: 13995 corp: 14/77b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 3 ChangeByte-CrossOver-CopyPart- 00:07:56.937 [2024-06-11 13:35:49.624854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000380a cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.624876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 #23 NEW cov: 12020 ft: 14009 corp: 15/79b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:56.937 [2024-06-11 13:35:49.665084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.665106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.665157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.665168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.937 #24 NEW cov: 12020 ft: 14175 corp: 16/84b lim: 10 exec/s: 0 rss: 72Mb L: 5/10 MS: 1 EraseBytes- 00:07:56.937 [2024-06-11 13:35:49.705354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.705375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.705428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.705439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.705491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e5e6 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.705502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.937 #25 NEW cov: 12020 ft: 14213 corp: 17/91b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ChangeBit- 00:07:56.937 [2024-06-11 13:35:49.745179] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003b0a cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.745205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 #26 NEW cov: 12020 ft: 14289 corp: 18/93b lim: 10 exec/s: 0 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:07:56.937 [2024-06-11 13:35:49.785895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000320a cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.785918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.785968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.785980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.786025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.786036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.786085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:000090c0 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.786096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.786143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.786154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:56.937 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:56.937 #27 NEW cov: 12043 ft: 14331 corp: 19/103b lim: 10 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:56.937 [2024-06-11 13:35:49.825695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.825717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.825771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.825783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:56.937 [2024-06-11 13:35:49.825834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000dfe4 cdw11:00000000 00:07:56.937 [2024-06-11 13:35:49.825845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:56.937 #28 NEW cov: 12043 ft: 14353 corp: 20/110b lim: 10 exec/s: 0 rss: 72Mb L: 7/10 MS: 1 ChangeBinInt- 00:07:57.196 [2024-06-11 13:35:49.865589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:00003f3d cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.865611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.196 #29 NEW cov: 12043 ft: 14375 corp: 21/112b lim: 10 exec/s: 29 rss: 72Mb L: 2/10 MS: 1 ChangeBit- 00:07:57.196 [2024-06-11 13:35:49.915984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003bff cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.916006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:49.916061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.916072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:49.916124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.916135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.196 #32 NEW cov: 12043 ft: 14383 corp: 22/119b lim: 10 exec/s: 32 rss: 73Mb L: 7/10 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:57.196 [2024-06-11 13:35:49.966359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003200 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.966381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:49.966431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000003e5 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.966442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:49.966491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c990 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.966502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:49.966551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c0e4 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.966562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:49.966612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00007c0a cdw11:00000000 00:07:57.196 [2024-06-11 13:35:49.966623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.196 #33 NEW cov: 12043 ft: 14407 corp: 23/129b lim: 10 exec/s: 33 rss: 73Mb L: 10/10 MS: 1 PersAutoDict- DE: "\000\003\345\311\220\300\344|"- 00:07:57.196 [2024-06-11 13:35:50.006493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003f5e cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.006516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.006569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00005e5e cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.006581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.006633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005e5e cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.006644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.006693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005e5e cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.006704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.006753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00005e3d cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.006764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.196 #34 NEW cov: 12043 ft: 14429 corp: 24/139b lim: 10 exec/s: 34 rss: 73Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:57.196 [2024-06-11 13:35:50.056392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.056421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.056474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000028e4 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.056485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.056541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e521 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.056553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.196 #35 NEW cov: 12043 ft: 14444 corp: 25/146b lim: 10 exec/s: 35 rss: 73Mb L: 7/10 MS: 1 ShuffleBytes- 00:07:57.196 [2024-06-11 13:35:50.106695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.106725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.106777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000028e4 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.106789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.106840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000e521 cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.106851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.196 [2024-06-11 13:35:50.106900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000f77c cdw11:00000000 00:07:57.196 [2024-06-11 13:35:50.106911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.455 #36 NEW cov: 12043 ft: 14453 corp: 26/154b lim: 10 exec/s: 36 rss: 73Mb L: 8/10 MS: 1 InsertByte- 00:07:57.455 [2024-06-11 13:35:50.156383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003d21 cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.156407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.455 #37 NEW cov: 12043 ft: 14475 corp: 27/156b lim: 10 exec/s: 37 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:07:57.455 [2024-06-11 13:35:50.196486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000213a cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.196507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.455 #40 NEW cov: 12043 ft: 14484 corp: 28/158b lim: 10 exec/s: 40 rss: 73Mb L: 2/10 MS: 3 EraseBytes-ShuffleBytes-InsertByte- 00:07:57.455 [2024-06-11 13:35:50.246631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003df2 cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.246653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.455 #41 NEW cov: 12043 ft: 14514 corp: 29/160b lim: 10 exec/s: 41 rss: 73Mb L: 2/10 MS: 1 ChangeByte- 00:07:57.455 [2024-06-11 13:35:50.287310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000300 cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.287333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.455 [2024-06-11 13:35:50.287381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:000032e5 cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.287393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.455 [2024-06-11 13:35:50.287441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c990 cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.287452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.455 [2024-06-11 13:35:50.287501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000c0e4 cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.287516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.455 [2024-06-11 13:35:50.287565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00007c0a cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.287576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.455 #42 NEW cov: 12043 
ft: 14534 corp: 30/170b lim: 10 exec/s: 42 rss: 73Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:57.455 [2024-06-11 13:35:50.336884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000393d cdw11:00000000 00:07:57.455 [2024-06-11 13:35:50.336907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.455 #43 NEW cov: 12043 ft: 14535 corp: 31/172b lim: 10 exec/s: 43 rss: 73Mb L: 2/10 MS: 1 ChangeBit- 00:07:57.714 [2024-06-11 13:35:50.377023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003b0a cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.377046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.714 #44 NEW cov: 12043 ft: 14550 corp: 32/174b lim: 10 exec/s: 44 rss: 73Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:57.714 [2024-06-11 13:35:50.417740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.417762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.417811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.417823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.417875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000c0e5 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.417886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.417934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000332 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.417945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.417994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000c90a cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.418005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.714 #45 NEW cov: 12043 ft: 14553 corp: 33/184b lim: 10 exec/s: 45 rss: 73Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:57.714 [2024-06-11 13:35:50.467315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000320e cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.467337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.714 #46 NEW cov: 12043 ft: 14568 corp: 34/186b lim: 10 exec/s: 46 rss: 73Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:57.714 [2024-06-11 13:35:50.507613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000003e5 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.507636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 
13:35:50.507687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000c90a cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.507697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.714 #47 NEW cov: 12043 ft: 14582 corp: 35/190b lim: 10 exec/s: 47 rss: 74Mb L: 4/10 MS: 1 EraseBytes- 00:07:57.714 [2024-06-11 13:35:50.557873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003bff cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.557896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.557947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.557959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.558011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007fff cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.558022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.714 #48 NEW cov: 12043 ft: 14595 corp: 36/197b lim: 10 exec/s: 48 rss: 74Mb L: 7/10 MS: 1 ChangeBit- 00:07:57.714 [2024-06-11 13:35:50.608161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002800 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.608182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.608238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a00 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.608249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.608300] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000003e5 cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.608311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.714 [2024-06-11 13:35:50.608361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:57.714 [2024-06-11 13:35:50.608372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.973 #49 NEW cov: 12043 ft: 14602 corp: 37/205b lim: 10 exec/s: 49 rss: 74Mb L: 8/10 MS: 1 CrossOver- 00:07:57.973 [2024-06-11 13:35:50.647835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000212d cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.647858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.973 #51 NEW cov: 12043 ft: 14632 corp: 38/207b lim: 10 exec/s: 51 rss: 74Mb L: 2/10 MS: 2 CrossOver-InsertByte- 00:07:57.973 [2024-06-11 13:35:50.698210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 
cid:4 nsid:0 cdw10:00003bff cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.698235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.698289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.698300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.698351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007fff cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.698363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.973 #52 NEW cov: 12043 ft: 14662 corp: 39/214b lim: 10 exec/s: 52 rss: 74Mb L: 7/10 MS: 1 CopyPart- 00:07:57.973 [2024-06-11 13:35:50.748576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.748603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.748654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.748665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.748715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000090c0 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.748726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.748776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e47c cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.748788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.973 #53 NEW cov: 12043 ft: 14672 corp: 40/222b lim: 10 exec/s: 53 rss: 74Mb L: 8/10 MS: 1 PersAutoDict- DE: "\000\003\345\311\220\300\344|"- 00:07:57.973 [2024-06-11 13:35:50.798824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.798847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.798898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000e5c9 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.798909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.798959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000090c0 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.798970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.799021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000e4e4 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.799032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.799084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00007c7c cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.799095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:57.973 #54 NEW cov: 12043 ft: 14693 corp: 41/232b lim: 10 exec/s: 54 rss: 74Mb L: 10/10 MS: 1 CopyPart- 00:07:57.973 [2024-06-11 13:35:50.848827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003bff cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.848850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.848900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.848911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.848960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00007fff cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.848971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:57.973 [2024-06-11 13:35:50.849019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff31 cdw11:00000000 00:07:57.973 [2024-06-11 13:35:50.849030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:57.973 #55 NEW cov: 12043 ft: 14701 corp: 42/240b lim: 10 exec/s: 27 rss: 74Mb L: 8/10 MS: 1 InsertByte- 00:07:57.973 #55 DONE cov: 12043 ft: 14701 corp: 42/240b lim: 10 exec/s: 27 rss: 74Mb 00:07:57.973 ###### Recommended dictionary. ###### 00:07:57.973 "\000\003\345\311\220\300\344|" # Uses: 2 00:07:57.973 "\004\000" # Uses: 0 00:07:57.974 ###### End of recommended dictionary. 
###### 00:07:57.974 Done 55 runs in 2 second(s) 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4407 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:07:58.233 13:35:51 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:07:58.233 [2024-06-11 13:35:51.078101] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:07:58.233 [2024-06-11 13:35:51.078177] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439854 ] 00:07:58.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.492 [2024-06-11 13:35:51.397403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.750 [2024-06-11 13:35:51.508894] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.750 [2024-06-11 13:35:51.572818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.750 [2024-06-11 13:35:51.589179] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:58.750 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:58.750 INFO: Seed: 50723054 00:07:58.750 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:07:58.750 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:07:58.750 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:58.750 INFO: A corpus is not provided, starting from an empty corpus 00:07:58.750 #2 INITED exec/s: 0 rss: 64Mb 00:07:58.750 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:58.750 This may also happen if the target rejected all inputs we tried so far 00:07:58.750 [2024-06-11 13:35:51.638010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:07:58.750 [2024-06-11 13:35:51.638050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.009 NEW_FUNC[1/685]: 0x48e380 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:59.009 NEW_FUNC[2/685]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:59.009 #3 NEW cov: 11799 ft: 11797 corp: 2/3b lim: 10 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:07:59.009 [2024-06-11 13:35:51.848850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.010 [2024-06-11 13:35:51.848896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.010 [2024-06-11 13:35:51.848974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.010 [2024-06-11 13:35:51.848998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.010 #5 NEW cov: 11929 ft: 12625 corp: 3/8b lim: 10 exec/s: 0 rss: 72Mb L: 5/5 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:59.010 [2024-06-11 13:35:51.908797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000290a cdw11:00000000 00:07:59.010 [2024-06-11 13:35:51.908833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.269 #6 NEW cov: 11935 ft: 12770 corp: 4/10b lim: 10 exec/s: 0 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:07:59.269 [2024-06-11 13:35:51.979479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005df7 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:51.979513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:51.979592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:51.979617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:51.979698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:51.979722] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:51.979803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:51.979829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.269 #8 NEW cov: 12020 ft: 13239 corp: 5/19b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:59.269 [2024-06-11 13:35:52.029070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002808 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.029104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.269 #9 NEW cov: 12020 ft: 13349 corp: 6/21b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:07:59.269 [2024-06-11 13:35:52.079462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.079497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:52.079574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000f8 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.079597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.269 #10 NEW cov: 12020 ft: 13421 corp: 7/26b lim: 10 exec/s: 0 rss: 72Mb L: 5/9 MS: 1 ChangeBinInt- 00:07:59.269 [2024-06-11 13:35:52.159956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a308 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.159990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:52.160065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000808 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.160090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:52.160168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000808 cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.160192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.269 [2024-06-11 13:35:52.160275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000080b cdw11:00000000 00:07:59.269 [2024-06-11 13:35:52.160299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.528 #11 NEW cov: 12020 ft: 13561 corp: 8/35b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:59.528 [2024-06-11 13:35:52.240164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a318 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.240206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:07:59.528 [2024-06-11 13:35:52.240288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000808 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.240313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.528 [2024-06-11 13:35:52.240392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000808 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.240416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.528 [2024-06-11 13:35:52.240496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000080b cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.240520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.528 #12 NEW cov: 12020 ft: 13564 corp: 9/44b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:59.528 [2024-06-11 13:35:52.320417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005df7 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.320450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.528 [2024-06-11 13:35:52.320529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7e7 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.320553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.528 [2024-06-11 13:35:52.320633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.320657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:59.528 [2024-06-11 13:35:52.320734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.320757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:59.528 #13 NEW cov: 12020 ft: 13592 corp: 10/53b lim: 10 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:07:59.528 [2024-06-11 13:35:52.370033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000c608 cdw11:00000000 00:07:59.528 [2024-06-11 13:35:52.370066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.528 #14 NEW cov: 12020 ft: 13636 corp: 11/55b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeByte- 00:07:59.786 [2024-06-11 13:35:52.440249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000200a cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.440281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.787 #15 NEW cov: 12020 ft: 13678 corp: 12/57b lim: 10 exec/s: 0 rss: 72Mb L: 2/9 MS: 1 ChangeBit- 00:07:59.787 [2024-06-11 13:35:52.490382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 
cid:4 nsid:0 cdw10:000028e2 cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.490413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.787 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:07:59.787 #16 NEW cov: 12043 ft: 13713 corp: 13/60b lim: 10 exec/s: 0 rss: 72Mb L: 3/9 MS: 1 InsertByte- 00:07:59.787 [2024-06-11 13:35:52.540723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005df7 cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.540755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.787 [2024-06-11 13:35:52.540833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.540857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.787 #17 NEW cov: 12043 ft: 13757 corp: 14/65b lim: 10 exec/s: 0 rss: 73Mb L: 5/9 MS: 1 EraseBytes- 00:07:59.787 [2024-06-11 13:35:52.620915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.620947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:59.787 [2024-06-11 13:35:52.621025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.621049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:59.787 #18 NEW cov: 12043 ft: 13799 corp: 15/69b lim: 10 exec/s: 18 rss: 73Mb L: 4/9 MS: 1 EraseBytes- 00:07:59.787 [2024-06-11 13:35:52.690965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002808 cdw11:00000000 00:07:59.787 [2024-06-11 13:35:52.690997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.046 #19 NEW cov: 12043 ft: 13818 corp: 16/71b lim: 10 exec/s: 19 rss: 73Mb L: 2/9 MS: 1 ShuffleBytes- 00:08:00.046 [2024-06-11 13:35:52.741104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3f cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.741136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.046 #20 NEW cov: 12043 ft: 13833 corp: 17/73b lim: 10 exec/s: 20 rss: 73Mb L: 2/9 MS: 1 InsertByte- 00:08:00.046 [2024-06-11 13:35:52.791473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.791506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.046 [2024-06-11 13:35:52.791582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:000000f8 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.791606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:08:00.046 #21 NEW cov: 12043 ft: 13859 corp: 18/78b lim: 10 exec/s: 21 rss: 73Mb L: 5/9 MS: 1 ChangeBit- 00:08:00.046 [2024-06-11 13:35:52.861971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005df7 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.862003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.046 [2024-06-11 13:35:52.862080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.862105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.046 [2024-06-11 13:35:52.862180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7e7 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.862208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.046 [2024-06-11 13:35:52.862287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.862311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.046 #22 NEW cov: 12043 ft: 13883 corp: 19/87b lim: 10 exec/s: 22 rss: 73Mb L: 9/9 MS: 1 ShuffleBytes- 00:08:00.046 [2024-06-11 13:35:52.911644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002828 cdw11:00000000 00:08:00.046 [2024-06-11 13:35:52.911677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.046 #28 NEW cov: 12043 ft: 13928 corp: 20/90b lim: 10 exec/s: 28 rss: 73Mb L: 3/9 MS: 1 CopyPart- 00:08:00.305 [2024-06-11 13:35:52.962387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000020f7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:52.962419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:52.962497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7e7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:52.962521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:52.962598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:52.962622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:52.962703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:52.962726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.305 #32 NEW cov: 12043 ft: 13983 corp: 21/99b lim: 10 exec/s: 32 rss: 73Mb L: 9/9 MS: 4 EraseBytes-ShuffleBytes-CopyPart-CrossOver- 00:08:00.305 [2024-06-11 13:35:53.032152] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002528 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.032185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.032269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e20a cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.032293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.305 #33 NEW cov: 12043 ft: 14029 corp: 22/103b lim: 10 exec/s: 33 rss: 73Mb L: 4/9 MS: 1 InsertByte- 00:08:00.305 [2024-06-11 13:35:53.102675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000a318 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.102712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.102790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000808 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.102814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.102894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000808 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.102917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.102995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000820b cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.103019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.305 #34 NEW cov: 12043 ft: 14060 corp: 23/112b lim: 10 exec/s: 34 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:08:00.305 [2024-06-11 13:35:53.182880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000020de cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.182913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.182992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7e7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.183016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.183094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.183118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.305 [2024-06-11 13:35:53.183203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.305 [2024-06-11 13:35:53.183226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.564 #35 NEW cov: 12043 ft: 14086 corp: 24/121b lim: 
10 exec/s: 35 rss: 73Mb L: 9/9 MS: 1 ChangeByte- 00:08:00.564 [2024-06-11 13:35:53.262994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.263026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.564 [2024-06-11 13:35:53.263107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.263130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.564 [2024-06-11 13:35:53.263227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000290a cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.263252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.564 #36 NEW cov: 12043 ft: 14261 corp: 25/127b lim: 10 exec/s: 36 rss: 74Mb L: 6/9 MS: 1 InsertRepeatedBytes- 00:08:00.564 [2024-06-11 13:35:53.343066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.343099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.564 [2024-06-11 13:35:53.343179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f5f7 cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.343213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.564 #37 NEW cov: 12043 ft: 14269 corp: 26/131b lim: 10 exec/s: 37 rss: 74Mb L: 4/9 MS: 1 ChangeBit- 00:08:00.564 [2024-06-11 13:35:53.423287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000087f7 cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.423322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.564 [2024-06-11 13:35:53.423401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.564 [2024-06-11 13:35:53.423426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.565 #38 NEW cov: 12043 ft: 14345 corp: 27/135b lim: 10 exec/s: 38 rss: 74Mb L: 4/9 MS: 1 ChangeByte- 00:08:00.565 [2024-06-11 13:35:53.473240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:08:00.565 [2024-06-11 13:35:53.473274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.824 #39 NEW cov: 12043 ft: 14351 corp: 28/138b lim: 10 exec/s: 39 rss: 74Mb L: 3/9 MS: 1 InsertByte- 00:08:00.824 [2024-06-11 13:35:53.523572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000028e2 cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.523604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.824 [2024-06-11 13:35:53.523683] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000ac2 cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.523708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.824 #40 NEW cov: 12043 ft: 14405 corp: 29/142b lim: 10 exec/s: 40 rss: 74Mb L: 4/9 MS: 1 InsertByte- 00:08:00.824 [2024-06-11 13:35:53.573552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000280a cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.573586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.824 #41 NEW cov: 12043 ft: 14421 corp: 30/144b lim: 10 exec/s: 41 rss: 74Mb L: 2/9 MS: 1 ShuffleBytes- 00:08:00.824 [2024-06-11 13:35:53.624207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000020de cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.624241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:00.824 [2024-06-11 13:35:53.624320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000f7e7 cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.624344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:00.824 [2024-06-11 13:35:53.624422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.624446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:00.824 [2024-06-11 13:35:53.624528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f7f7 cdw11:00000000 00:08:00.824 [2024-06-11 13:35:53.624549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:00.824 #42 NEW cov: 12043 ft: 14430 corp: 31/153b lim: 10 exec/s: 21 rss: 74Mb L: 9/9 MS: 1 ChangeByte- 00:08:00.824 #42 DONE cov: 12043 ft: 14430 corp: 31/153b lim: 10 exec/s: 21 rss: 74Mb 00:08:00.824 Done 42 runs in 2 second(s) 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 
00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4408 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:01.084 13:35:53 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:08:01.084 [2024-06-11 13:35:53.878140] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:01.084 [2024-06-11 13:35:53.878225] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440289 ] 00:08:01.084 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.343 [2024-06-11 13:35:54.204079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.603 [2024-06-11 13:35:54.300975] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.603 [2024-06-11 13:35:54.364773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.603 [2024-06-11 13:35:54.381122] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:08:01.603 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:01.603 INFO: Seed: 2841754856 00:08:01.603 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:01.603 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:01.603 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:08:01.603 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.603 [2024-06-11 13:35:54.429044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.603 [2024-06-11 13:35:54.429080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.603 #2 INITED cov: 11827 ft: 11825 corp: 1/1b exec/s: 0 rss: 70Mb 00:08:01.603 [2024-06-11 13:35:54.479063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.603 [2024-06-11 13:35:54.479096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.862 #3 NEW cov: 11957 ft: 12553 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 CopyPart- 00:08:01.862 [2024-06-11 13:35:54.559282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.559314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.863 #4 NEW cov: 11963 ft: 12697 corp: 3/3b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 CrossOver- 00:08:01.863 [2024-06-11 13:35:54.610002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.610035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.610116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.610140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.610230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.610254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.610336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.610359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.863 #5 NEW cov: 12048 ft: 13677 corp: 4/7b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:08:01.863 [2024-06-11 13:35:54.690219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.690251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.690332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.690356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.690440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.690462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.690543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.690566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:01.863 #6 NEW cov: 12048 ft: 13805 corp: 5/11b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeBit- 00:08:01.863 [2024-06-11 13:35:54.770497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.770530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.770611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.770636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.770722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.770746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:01.863 [2024-06-11 13:35:54.770829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:01.863 [2024-06-11 13:35:54.770852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.122 #7 NEW cov: 12048 ft: 13858 corp: 6/15b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:08:02.122 [2024-06-11 13:35:54.850716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.850750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.850832] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.850856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.850937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.850961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.851040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.851061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.122 #8 NEW cov: 12048 ft: 13905 corp: 7/19b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 ChangeByte- 00:08:02.122 [2024-06-11 13:35:54.900866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.900899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.900981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.901003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.901084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.901107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.901187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.901216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.122 #9 NEW cov: 12048 ft: 13926 corp: 8/23b lim: 5 exec/s: 0 rss: 71Mb L: 4/4 MS: 1 CopyPart- 00:08:02.122 [2024-06-11 13:35:54.981252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.981290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.981373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.981397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:08:02.122 [2024-06-11 13:35:54.981479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.981503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.981583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.981606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.122 [2024-06-11 13:35:54.981686] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:54.981709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:02.122 #10 NEW cov: 12048 ft: 14023 corp: 9/28b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CopyPart- 00:08:02.122 [2024-06-11 13:35:55.031046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.122 [2024-06-11 13:35:55.031078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.123 [2024-06-11 13:35:55.031159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.123 [2024-06-11 13:35:55.031184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.123 [2024-06-11 13:35:55.031271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.123 [2024-06-11 13:35:55.031295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.381 #11 NEW cov: 12048 ft: 14237 corp: 10/31b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CrossOver- 00:08:02.381 [2024-06-11 13:35:55.090961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.090994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.381 [2024-06-11 13:35:55.091074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.091098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.381 #12 NEW cov: 12048 ft: 14442 corp: 11/33b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 EraseBytes- 00:08:02.381 [2024-06-11 13:35:55.171617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:02.381 [2024-06-11 13:35:55.171650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.381 [2024-06-11 13:35:55.171732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.171761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.381 [2024-06-11 13:35:55.171842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.171866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.381 [2024-06-11 13:35:55.171947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.171972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.381 #13 NEW cov: 12048 ft: 14478 corp: 12/37b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 ShuffleBytes- 00:08:02.381 [2024-06-11 13:35:55.221566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.221598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.381 [2024-06-11 13:35:55.221678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.221702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.381 [2024-06-11 13:35:55.221783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.381 [2024-06-11 13:35:55.221805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.381 #14 NEW cov: 12048 ft: 14501 corp: 13/40b lim: 5 exec/s: 0 rss: 72Mb L: 3/5 MS: 1 ChangeBinInt- 00:08:02.382 [2024-06-11 13:35:55.271337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.382 [2024-06-11 13:35:55.271370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.640 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:02.640 #15 NEW cov: 12071 ft: 14513 corp: 14/41b lim: 5 exec/s: 0 rss: 72Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:02.640 [2024-06-11 13:35:55.411940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.640 [2024-06-11 13:35:55.411980] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.640 #16 NEW cov: 12071 ft: 14537 corp: 15/42b lim: 5 exec/s: 16 rss: 72Mb L: 1/5 MS: 1 CrossOver- 00:08:02.640 [2024-06-11 13:35:55.492527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.640 [2024-06-11 13:35:55.492561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.640 [2024-06-11 13:35:55.492652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.641 [2024-06-11 13:35:55.492678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.641 [2024-06-11 13:35:55.492767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.641 [2024-06-11 13:35:55.492795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.641 #17 NEW cov: 12071 ft: 14614 corp: 16/45b lim: 5 exec/s: 17 rss: 72Mb L: 3/5 MS: 1 ChangeBit- 00:08:02.899 [2024-06-11 13:35:55.572932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.572965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.573052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.573077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.573165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.573189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.573283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.573308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:02.899 #18 NEW cov: 12071 ft: 14629 corp: 17/49b lim: 5 exec/s: 18 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:08:02.899 [2024-06-11 13:35:55.652960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.652992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.653081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.653105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.653195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.653226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.899 #19 NEW cov: 12071 ft: 14651 corp: 18/52b lim: 5 exec/s: 19 rss: 73Mb L: 3/5 MS: 1 InsertByte- 00:08:02.899 [2024-06-11 13:35:55.733229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.733261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.733347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.733371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:02.899 [2024-06-11 13:35:55.733458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:02.899 [2024-06-11 13:35:55.733482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:02.899 #20 NEW cov: 12071 ft: 14652 corp: 19/55b lim: 5 exec/s: 20 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:08:03.158 [2024-06-11 13:35:55.813420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.813452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.813540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.813564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.813650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.813676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.158 #21 NEW cov: 12071 ft: 14666 corp: 20/58b lim: 5 exec/s: 21 rss: 73Mb L: 3/5 MS: 1 CrossOver- 00:08:03.158 [2024-06-11 13:35:55.873672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.873704] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.873793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.873818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.873903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.873931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.158 #22 NEW cov: 12071 ft: 14680 corp: 21/61b lim: 5 exec/s: 22 rss: 73Mb L: 3/5 MS: 1 ChangeBit- 00:08:03.158 [2024-06-11 13:35:55.923963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.923994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.924083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.924106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.924193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.924223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:55.924308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:55.924330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.158 #23 NEW cov: 12071 ft: 14727 corp: 22/65b lim: 5 exec/s: 23 rss: 73Mb L: 4/5 MS: 1 ChangeBinInt- 00:08:03.158 [2024-06-11 13:35:56.004278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:56.004311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:56.004400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 [2024-06-11 13:35:56.004425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.158 [2024-06-11 13:35:56.004513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.158 
[2024-06-11 13:35:56.004536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.159 [2024-06-11 13:35:56.004621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.159 [2024-06-11 13:35:56.004644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.159 #24 NEW cov: 12071 ft: 14735 corp: 23/69b lim: 5 exec/s: 24 rss: 73Mb L: 4/5 MS: 1 InsertByte- 00:08:03.159 [2024-06-11 13:35:56.054509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.159 [2024-06-11 13:35:56.054541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.159 [2024-06-11 13:35:56.054628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.159 [2024-06-11 13:35:56.054652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.159 [2024-06-11 13:35:56.054738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.159 [2024-06-11 13:35:56.054763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.159 [2024-06-11 13:35:56.054849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.159 [2024-06-11 13:35:56.054870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.417 #25 NEW cov: 12071 ft: 14789 corp: 24/73b lim: 5 exec/s: 25 rss: 73Mb L: 4/5 MS: 1 ChangeBit- 00:08:03.417 [2024-06-11 13:35:56.104381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.104413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.104504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.104529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.104615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.104639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.417 #26 NEW cov: 12071 ft: 14822 corp: 25/76b lim: 5 exec/s: 26 rss: 73Mb L: 3/5 MS: 1 ShuffleBytes- 00:08:03.417 [2024-06-11 13:35:56.184883] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.184917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.185011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.185036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.185124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.185147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.185252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.185278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.417 #27 NEW cov: 12071 ft: 14829 corp: 26/80b lim: 5 exec/s: 27 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:08:03.417 [2024-06-11 13:35:56.265123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.265157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.265224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.265249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.265334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.265358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.417 [2024-06-11 13:35:56.265441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.417 [2024-06-11 13:35:56.265465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.417 #28 NEW cov: 12071 ft: 14845 corp: 27/84b lim: 5 exec/s: 28 rss: 74Mb L: 4/5 MS: 1 InsertByte- 00:08:03.676 [2024-06-11 13:35:56.345313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.345348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.345435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.345460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.345547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.345572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.345659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.345681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.425776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.425809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.425895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.425919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.426008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.426034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.426119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.426143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:03.676 [2024-06-11 13:35:56.426234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:03.676 [2024-06-11 13:35:56.426259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:03.676 #30 NEW cov: 12071 ft: 14874 corp: 28/89b lim: 5 exec/s: 15 rss: 74Mb L: 5/5 MS: 2 CrossOver-CopyPart- 00:08:03.676 #30 DONE cov: 12071 ft: 14874 corp: 28/89b lim: 5 exec/s: 15 rss: 74Mb 00:08:03.676 Done 30 runs in 2 second(s) 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # 
(( i < fuzz_num )) 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4409 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:03.935 13:35:56 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:08:03.935 [2024-06-11 13:35:56.652828] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:03.935 [2024-06-11 13:35:56.652908] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440731 ] 00:08:03.935 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.193 [2024-06-11 13:35:56.964651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.193 [2024-06-11 13:35:57.075646] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.452 [2024-06-11 13:35:57.139485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.452 [2024-06-11 13:35:57.155837] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:08:04.452 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:04.452 INFO: Seed: 1322776831 00:08:04.452 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:04.452 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:04.452 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:08:04.452 INFO: A corpus is not provided, starting from an empty corpus 00:08:04.452 [2024-06-11 13:35:57.205037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.452 [2024-06-11 13:35:57.205074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.452 #2 INITED cov: 11827 ft: 11815 corp: 1/1b exec/s: 0 rss: 70Mb 00:08:04.452 [2024-06-11 13:35:57.255096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.452 [2024-06-11 13:35:57.255130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.452 #3 NEW cov: 11957 ft: 12532 corp: 2/2b lim: 5 exec/s: 0 rss: 70Mb L: 1/1 MS: 1 ChangeBit- 00:08:04.452 [2024-06-11 13:35:57.336123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.452 [2024-06-11 13:35:57.336156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.452 [2024-06-11 13:35:57.336240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.453 [2024-06-11 13:35:57.336264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.453 [2024-06-11 13:35:57.336345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.453 [2024-06-11 13:35:57.336368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.453 [2024-06-11 13:35:57.336448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.453 [2024-06-11 13:35:57.336470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.453 [2024-06-11 13:35:57.336550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.453 [2024-06-11 13:35:57.336574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:04.712 #4 NEW cov: 11963 ft: 13501 corp: 3/7b lim: 5 exec/s: 0 rss: 70Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:08:04.712 [2024-06-11 13:35:57.415582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:08:04.712 [2024-06-11 13:35:57.415616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.712 #5 NEW cov: 12048 ft: 13739 corp: 4/8b lim: 5 exec/s: 0 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:08:04.712 [2024-06-11 13:35:57.476510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.476542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.712 [2024-06-11 13:35:57.476624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.476648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.712 [2024-06-11 13:35:57.476725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.476748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:04.712 [2024-06-11 13:35:57.476828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.476852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:04.712 [2024-06-11 13:35:57.476933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.476954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:04.712 #6 NEW cov: 12048 ft: 13840 corp: 5/13b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 CMP- DE: "\001\000\000\012"- 00:08:04.712 [2024-06-11 13:35:57.556162] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.556203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.712 [2024-06-11 13:35:57.556290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.712 [2024-06-11 13:35:57.556315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.712 #7 NEW cov: 12048 ft: 14086 corp: 6/15b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 CopyPart- 00:08:04.970 [2024-06-11 13:35:57.636333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.636366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.970 [2024-06-11 
13:35:57.636448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.636472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.970 #8 NEW cov: 12048 ft: 14162 corp: 7/17b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:04.970 [2024-06-11 13:35:57.716619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.716658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.970 [2024-06-11 13:35:57.716736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.716761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.970 #9 NEW cov: 12048 ft: 14192 corp: 8/19b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBit- 00:08:04.970 [2024-06-11 13:35:57.796807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.796841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.970 [2024-06-11 13:35:57.796921] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.796945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.970 #10 NEW cov: 12048 ft: 14227 corp: 9/21b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ShuffleBytes- 00:08:04.970 [2024-06-11 13:35:57.846917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.846961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:04.970 [2024-06-11 13:35:57.847039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:04.970 [2024-06-11 13:35:57.847063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:04.970 #11 NEW cov: 12048 ft: 14292 corp: 10/23b lim: 5 exec/s: 0 rss: 71Mb L: 2/5 MS: 1 ChangeBinInt- 00:08:05.229 [2024-06-11 13:35:57.897462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.897494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:57.897576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) 
qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.897600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:57.897679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.897702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:57.897781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.897805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.229 #12 NEW cov: 12048 ft: 14338 corp: 11/27b lim: 5 exec/s: 0 rss: 71Mb L: 4/5 MS: 1 CopyPart- 00:08:05.229 [2024-06-11 13:35:57.977546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.977579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:57.977664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.977690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:57.977770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:57.977793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.229 #13 NEW cov: 12048 ft: 14502 corp: 12/30b lim: 5 exec/s: 0 rss: 71Mb L: 3/5 MS: 1 CopyPart- 00:08:05.229 [2024-06-11 13:35:58.028068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.028100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:58.028179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.028208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:58.028288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.028311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:58.028389] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.028412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:58.028490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.028513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:05.229 #14 NEW cov: 12048 ft: 14520 corp: 13/35b lim: 5 exec/s: 0 rss: 71Mb L: 5/5 MS: 1 ChangeBit- 00:08:05.229 [2024-06-11 13:35:58.078187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.078226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:58.078305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.229 [2024-06-11 13:35:58.078329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.229 [2024-06-11 13:35:58.078408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.230 [2024-06-11 13:35:58.078431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.230 [2024-06-11 13:35:58.078509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.230 [2024-06-11 13:35:58.078533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:05.230 [2024-06-11 13:35:58.078618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.230 [2024-06-11 13:35:58.078640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:05.501 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:05.501 #15 NEW cov: 12071 ft: 14605 corp: 14/40b lim: 5 exec/s: 15 rss: 72Mb L: 5/5 MS: 1 ChangeBit- 00:08:05.501 [2024-06-11 13:35:58.300675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.501 [2024-06-11 13:35:58.300725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.501 [2024-06-11 13:35:58.300826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.501 [2024-06-11 
13:35:58.300846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.501 [2024-06-11 13:35:58.300945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.501 [2024-06-11 13:35:58.300962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:05.501 #16 NEW cov: 12071 ft: 14661 corp: 15/43b lim: 5 exec/s: 16 rss: 72Mb L: 3/5 MS: 1 CrossOver- 00:08:05.501 [2024-06-11 13:35:58.370059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.501 [2024-06-11 13:35:58.370093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.501 #17 NEW cov: 12071 ft: 14750 corp: 16/44b lim: 5 exec/s: 17 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:08:05.769 [2024-06-11 13:35:58.440824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.440857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.769 [2024-06-11 13:35:58.440950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.440970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.769 #18 NEW cov: 12071 ft: 14802 corp: 17/46b lim: 5 exec/s: 18 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:08:05.769 [2024-06-11 13:35:58.501206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.501239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.769 [2024-06-11 13:35:58.501330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.501349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.769 #19 NEW cov: 12071 ft: 14824 corp: 18/48b lim: 5 exec/s: 19 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:08:05.769 [2024-06-11 13:35:58.561285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.561318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.769 [2024-06-11 13:35:58.561420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.561440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:05.769 #20 NEW cov: 12071 ft: 14870 corp: 19/50b lim: 5 exec/s: 20 rss: 72Mb L: 2/5 MS: 1 CrossOver- 00:08:05.769 [2024-06-11 13:35:58.651814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.651846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:05.769 [2024-06-11 13:35:58.651945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:05.769 [2024-06-11 13:35:58.651962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 #21 NEW cov: 12071 ft: 14915 corp: 20/52b lim: 5 exec/s: 21 rss: 72Mb L: 2/5 MS: 1 EraseBytes- 00:08:06.028 [2024-06-11 13:35:58.742325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.742358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-06-11 13:35:58.742450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.742468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 #22 NEW cov: 12071 ft: 14937 corp: 21/54b lim: 5 exec/s: 22 rss: 72Mb L: 2/5 MS: 1 ChangeBit- 00:08:06.028 [2024-06-11 13:35:58.832213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.832245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 #23 NEW cov: 12071 ft: 14958 corp: 22/55b lim: 5 exec/s: 23 rss: 72Mb L: 1/5 MS: 1 EraseBytes- 00:08:06.028 [2024-06-11 13:35:58.923835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.923868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.028 [2024-06-11 13:35:58.923970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.923989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.028 [2024-06-11 13:35:58.924083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.924101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.028 [2024-06-11 13:35:58.924205] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.028 [2024-06-11 13:35:58.924225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.287 #24 NEW cov: 12071 ft: 14972 corp: 23/59b lim: 5 exec/s: 24 rss: 72Mb L: 4/5 MS: 1 EraseBytes- 00:08:06.287 [2024-06-11 13:35:58.994161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:58.994205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.287 [2024-06-11 13:35:58.994290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:58.994308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.287 [2024-06-11 13:35:58.994401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:58.994421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:06.287 [2024-06-11 13:35:58.994506] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:58.994526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:06.287 #25 NEW cov: 12071 ft: 15029 corp: 24/63b lim: 5 exec/s: 25 rss: 72Mb L: 4/5 MS: 1 CopyPart- 00:08:06.287 [2024-06-11 13:35:59.084045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:59.084079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.287 [2024-06-11 13:35:59.084167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:59.084185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.287 #26 NEW cov: 12071 ft: 15042 corp: 25/65b lim: 5 exec/s: 26 rss: 73Mb L: 2/5 MS: 1 InsertByte- 00:08:06.287 [2024-06-11 13:35:59.174568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:59.174603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:06.287 [2024-06-11 13:35:59.174697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:06.287 [2024-06-11 13:35:59.174716] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:06.546 #27 NEW cov: 12071 ft: 15047 corp: 26/67b lim: 5 exec/s: 13 rss: 73Mb L: 2/5 MS: 1 ChangeBit- 00:08:06.546 #27 DONE cov: 12071 ft: 15047 corp: 26/67b lim: 5 exec/s: 13 rss: 73Mb 00:08:06.546 ###### Recommended dictionary. ###### 00:08:06.546 "\001\000\000\012" # Uses: 0 00:08:06.546 ###### End of recommended dictionary. ###### 00:08:06.546 Done 27 runs in 2 second(s) 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4410 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:06.546 13:35:59 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:08:06.546 [2024-06-11 13:35:59.420871] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
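For readers following the nvmf/run.sh trace above: the per-run launch boils down to the shell sketch below. The paths, the 4410 port, and every flag are copied from the trace itself; the standalone-script framing, the redirection of the sed output and of the two leak-suppression echoes into their files, and the export of LSAN_OPTIONS are assumptions inferred from the -c/-D arguments and the suppressions path, not a verbatim copy of nvmf/run.sh.

    # Hedged sketch of one short-fuzz nvmf run (fuzzer_type 10); values taken from the trace above.
    fuzzer_type=10
    timen=1                                  # -t: seconds to fuzz
    core=0x1                                 # -m: reactor core mask
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    corpus_dir=$spdk/../corpus/llvm_nvmf_${fuzzer_type}
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz
    port=44$(printf %02d "$fuzzer_type")     # 4410 for run 10, 4411 for run 11, ...
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    mkdir -p "$corpus_dir"
    # Point the JSON target config at this run's TCP listener port (redirection assumed from -c below).
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$spdk/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # Known intentional leaks in the target are suppressed for LeakSanitizer (redirection assumed).
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > "$suppress_file"
    export LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0

    "$spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$spdk/../output/llvm/" -F "$trid" -c "$nvmf_cfg" \
        -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

Deriving trsvcid from the fuzzer index presumably keeps each run's NVMe/TCP listener on its own port, so a lingering target from the previous run cannot collide with the next one.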
00:08:06.546 [2024-06-11 13:35:59.420954] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441168 ] 00:08:06.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.114 [2024-06-11 13:35:59.741221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.114 [2024-06-11 13:35:59.852745] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.114 [2024-06-11 13:35:59.916655] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.114 [2024-06-11 13:35:59.933017] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:08:07.115 INFO: Running with entropic power schedule (0xFF, 100). 00:08:07.115 INFO: Seed: 4101767321 00:08:07.115 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:07.115 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:07.115 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:08:07.115 INFO: A corpus is not provided, starting from an empty corpus 00:08:07.115 #2 INITED exec/s: 0 rss: 65Mb 00:08:07.115 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:07.115 This may also happen if the target rejected all inputs we tried so far 00:08:07.115 [2024-06-11 13:35:59.988082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.115 [2024-06-11 13:35:59.988124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.115 [2024-06-11 13:35:59.988177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.115 [2024-06-11 13:35:59.988210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.374 NEW_FUNC[1/684]: 0x48fcf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:08:07.374 NEW_FUNC[2/684]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:07.374 #15 NEW cov: 11814 ft: 11842 corp: 2/24b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:07.374 [2024-06-11 13:36:00.258903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.374 [2024-06-11 13:36:00.258969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.374 [2024-06-11 13:36:00.259021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.374 [2024-06-11 13:36:00.259043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.633 NEW_FUNC[1/2]: 0x1a6b4c0 in event_queue_run_batch 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:534 00:08:07.633 NEW_FUNC[2/2]: 0x1a706a0 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:886 00:08:07.633 #16 NEW cov: 11980 ft: 12417 corp: 3/47b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeByte- 00:08:07.633 [2024-06-11 13:36:00.389107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.633 [2024-06-11 13:36:00.389153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.633 [2024-06-11 13:36:00.389212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.633 [2024-06-11 13:36:00.389235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.633 #17 NEW cov: 11986 ft: 12555 corp: 4/70b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeByte- 00:08:07.633 [2024-06-11 13:36:00.519458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.633 [2024-06-11 13:36:00.519501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.633 [2024-06-11 13:36:00.519553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.633 [2024-06-11 13:36:00.519575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.892 #18 NEW cov: 12071 ft: 12882 corp: 5/93b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 CopyPart- 00:08:07.892 [2024-06-11 13:36:00.609684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d393d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.609726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.892 [2024-06-11 13:36:00.609778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.609800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.892 #19 NEW cov: 12071 ft: 13000 corp: 6/116b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 ChangeBinInt- 00:08:07.892 [2024-06-11 13:36:00.690093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.690134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:07.892 [2024-06-11 13:36:00.690186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d393d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.690216] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:07.892 [2024-06-11 13:36:00.690264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.690290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:07.892 [2024-06-11 13:36:00.690335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0a3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.690356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:07.892 #20 NEW cov: 12071 ft: 13624 corp: 7/155b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CrossOver- 00:08:07.892 [2024-06-11 13:36:00.780058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:07.892 [2024-06-11 13:36:00.780099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.168 #21 NEW cov: 12071 ft: 14078 corp: 8/170b lim: 40 exec/s: 0 rss: 72Mb L: 15/39 MS: 1 EraseBytes- 00:08:08.168 [2024-06-11 13:36:00.870564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:00.870605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.168 [2024-06-11 13:36:00.870657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:00.870679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.168 [2024-06-11 13:36:00.870724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:00.870745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.168 [2024-06-11 13:36:00.870790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:00.870811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.168 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:08.168 #22 NEW cov: 12088 ft: 14203 corp: 9/206b lim: 40 exec/s: 22 rss: 72Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:08:08.168 [2024-06-11 13:36:00.991645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:353d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:00.991679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.168 [2024-06-11 13:36:00.991768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:00.991793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.168 #23 NEW cov: 12088 ft: 14288 corp: 10/229b lim: 40 exec/s: 23 rss: 72Mb L: 23/39 MS: 1 ChangeBit- 00:08:08.168 [2024-06-11 13:36:01.042292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:01.042327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.168 [2024-06-11 13:36:01.042417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:01.042448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.168 [2024-06-11 13:36:01.042538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.168 [2024-06-11 13:36:01.042562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.169 [2024-06-11 13:36:01.042649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:393d833d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.169 [2024-06-11 13:36:01.042673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.169 [2024-06-11 13:36:01.042761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:3d0a3d3d cdw11:3d0e3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.169 [2024-06-11 13:36:01.042785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.460 #24 NEW cov: 12088 ft: 14403 corp: 11/269b lim: 40 exec/s: 24 rss: 72Mb L: 40/40 MS: 1 CrossOver- 00:08:08.460 [2024-06-11 13:36:01.122386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.122419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.122507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d393d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.122532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.122620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.122644] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.122733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0a3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.122755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.460 #25 NEW cov: 12088 ft: 14477 corp: 12/308b lim: 40 exec/s: 25 rss: 72Mb L: 39/40 MS: 1 CrossOver- 00:08:08.460 [2024-06-11 13:36:01.202181] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d17003d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.202220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.202312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.202338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.460 #26 NEW cov: 12088 ft: 14539 corp: 13/331b lim: 40 exec/s: 26 rss: 72Mb L: 23/40 MS: 1 ChangeBinInt- 00:08:08.460 [2024-06-11 13:36:01.252696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.252732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.252820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.252850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.252942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d0a0e23 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.252967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.253056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:23232323 cdw11:23232323 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.253079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.460 #27 NEW cov: 12088 ft: 14545 corp: 14/369b lim: 40 exec/s: 27 rss: 72Mb L: 38/40 MS: 1 InsertRepeatedBytes- 00:08:08.460 [2024-06-11 13:36:01.312690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.312723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.312816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.312841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.460 [2024-06-11 13:36:01.312931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.460 [2024-06-11 13:36:01.312956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.460 #28 NEW cov: 12088 ft: 14759 corp: 15/395b lim: 40 exec/s: 28 rss: 72Mb L: 26/40 MS: 1 CopyPart- 00:08:08.719 [2024-06-11 13:36:01.362981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.719 [2024-06-11 13:36:01.363014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.363105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d393d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.363130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.363224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.363249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.363338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0a3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.363362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.720 #29 NEW cov: 12088 ft: 14767 corp: 16/434b lim: 40 exec/s: 29 rss: 72Mb L: 39/40 MS: 1 ShuffleBytes- 00:08:08.720 [2024-06-11 13:36:01.442864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0a0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.442896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.442972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:23232323 cdw11:23232323 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.442993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.720 #30 NEW cov: 12088 ft: 14841 corp: 17/457b lim: 40 exec/s: 30 rss: 72Mb L: 23/40 MS: 1 EraseBytes- 00:08:08.720 [2024-06-11 13:36:01.513605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.513638] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.513712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d393d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.513729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.513803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3dff3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.513820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.513892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.513910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.720 [2024-06-11 13:36:01.513984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:3d0e3d3d cdw11:0a3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.514000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:08.720 #31 NEW cov: 12088 ft: 14864 corp: 18/497b lim: 40 exec/s: 31 rss: 72Mb L: 40/40 MS: 1 InsertByte- 00:08:08.720 [2024-06-11 13:36:01.593115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.720 [2024-06-11 13:36:01.593147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.979 #32 NEW cov: 12088 ft: 14890 corp: 19/511b lim: 40 exec/s: 32 rss: 72Mb L: 14/40 MS: 1 EraseBytes- 00:08:08.979 [2024-06-11 13:36:01.653522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.653555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.653633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:003d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.653650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.979 #33 NEW cov: 12088 ft: 14903 corp: 20/534b lim: 40 exec/s: 33 rss: 73Mb L: 23/40 MS: 1 InsertRepeatedBytes- 00:08:08.979 [2024-06-11 13:36:01.723862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d17003d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.723896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.723977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d89 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.723998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.724077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.724094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.979 #34 NEW cov: 12088 ft: 14933 corp: 21/559b lim: 40 exec/s: 34 rss: 73Mb L: 25/40 MS: 1 CopyPart- 00:08:08.979 [2024-06-11 13:36:01.804284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d393d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.804317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.804391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.804408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.804481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.804497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.804571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d0a0e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.804587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:08.979 #35 NEW cov: 12095 ft: 14938 corp: 22/591b lim: 40 exec/s: 35 rss: 73Mb L: 32/40 MS: 1 CrossOver- 00:08:08.979 [2024-06-11 13:36:01.884529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.884561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.884635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.884652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.884726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffff3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.884752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:08.979 [2024-06-11 13:36:01.884823] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.979 [2024-06-11 13:36:01.884840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:09.239 #36 NEW cov: 12095 ft: 14962 corp: 23/629b lim: 40 exec/s: 36 rss: 73Mb L: 38/40 MS: 1 CopyPart- 00:08:09.239 [2024-06-11 13:36:01.964385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:3d3d3d3d cdw11:3d3d3d3d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.239 [2024-06-11 13:36:01.964417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:09.239 [2024-06-11 13:36:01.964493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:3d833d3d cdw11:0a200000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:09.239 [2024-06-11 13:36:01.964515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:09.239 #37 NEW cov: 12095 ft: 14988 corp: 24/647b lim: 40 exec/s: 18 rss: 73Mb L: 18/40 MS: 1 CMP- DE: " \000\000\000"- 00:08:09.239 #37 DONE cov: 12095 ft: 14988 corp: 24/647b lim: 40 exec/s: 18 rss: 73Mb 00:08:09.239 ###### Recommended dictionary. ###### 00:08:09.239 " \000\000\000" # Uses: 0 00:08:09.239 ###### End of recommended dictionary. ###### 00:08:09.239 Done 37 runs in 2 second(s) 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:09.239 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4411 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:09.497 13:36:02 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:08:09.497 [2024-06-11 13:36:02.189713] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:09.497 [2024-06-11 13:36:02.189795] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441726 ] 00:08:09.497 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.755 [2024-06-11 13:36:02.506598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.755 [2024-06-11 13:36:02.618408] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.014 [2024-06-11 13:36:02.682271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.014 [2024-06-11 13:36:02.698624] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:08:10.014 INFO: Running with entropic power schedule (0xFF, 100). 00:08:10.014 INFO: Seed: 2571826650 00:08:10.014 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:10.014 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:10.014 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:08:10.014 INFO: A corpus is not provided, starting from an empty corpus 00:08:10.014 #2 INITED exec/s: 0 rss: 65Mb 00:08:10.014 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:10.014 This may also happen if the target rejected all inputs we tried so far 00:08:10.014 [2024-06-11 13:36:02.754568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.014 [2024-06-11 13:36:02.754604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.014 [2024-06-11 13:36:02.754676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.014 [2024-06-11 13:36:02.754694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.273 NEW_FUNC[1/687]: 0x491a60 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:08:10.273 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:10.273 #7 NEW cov: 11862 ft: 11859 corp: 2/20b lim: 40 exec/s: 0 rss: 72Mb L: 19/19 MS: 5 ChangeBit-CopyPart-ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:08:10.273 [2024-06-11 13:36:02.974999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a00006c cdw11:6c6c6c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.273 [2024-06-11 13:36:02.975043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.273 #9 NEW cov: 11992 ft: 13236 corp: 3/28b lim: 40 exec/s: 0 rss: 72Mb L: 8/19 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:08:10.273 [2024-06-11 13:36:03.035225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.273 [2024-06-11 13:36:03.035258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.273 [2024-06-11 13:36:03.035332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.273 [2024-06-11 13:36:03.035349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.273 #10 NEW cov: 11998 ft: 13415 corp: 4/44b lim: 40 exec/s: 0 rss: 72Mb L: 16/19 MS: 1 EraseBytes- 00:08:10.273 [2024-06-11 13:36:03.105452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.273 [2024-06-11 13:36:03.105485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.273 [2024-06-11 13:36:03.105561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0000d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.273 [2024-06-11 13:36:03.105578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.273 #11 NEW cov: 12083 ft: 13642 corp: 5/65b lim: 40 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:08:10.532 
[2024-06-11 13:36:03.185656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.532 [2024-06-11 13:36:03.185689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.532 [2024-06-11 13:36:03.185763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.532 [2024-06-11 13:36:03.185780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.532 #12 NEW cov: 12083 ft: 13783 corp: 6/85b lim: 40 exec/s: 0 rss: 72Mb L: 20/21 MS: 1 InsertByte- 00:08:10.532 [2024-06-11 13:36:03.235796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:7ed9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.532 [2024-06-11 13:36:03.235829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.532 [2024-06-11 13:36:03.235902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.532 [2024-06-11 13:36:03.235919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.532 #13 NEW cov: 12083 ft: 13838 corp: 7/105b lim: 40 exec/s: 0 rss: 72Mb L: 20/21 MS: 1 ShuffleBytes- 00:08:10.532 [2024-06-11 13:36:03.306002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:7ed9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.532 [2024-06-11 13:36:03.306035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.532 [2024-06-11 13:36:03.306108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d92d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.532 [2024-06-11 13:36:03.306124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.532 #14 NEW cov: 12083 ft: 13885 corp: 8/126b lim: 40 exec/s: 0 rss: 72Mb L: 21/21 MS: 1 InsertByte- 00:08:10.533 [2024-06-11 13:36:03.386441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d90ad9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.533 [2024-06-11 13:36:03.386473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.533 [2024-06-11 13:36:03.386548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d900d9 cdw11:d9d9d900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.533 [2024-06-11 13:36:03.386564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.533 [2024-06-11 13:36:03.386640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:6c6c6c6c cdw11:00d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.533 [2024-06-11 13:36:03.386657] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.533 #15 NEW cov: 12083 ft: 14121 corp: 9/150b lim: 40 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CrossOver- 00:08:10.533 [2024-06-11 13:36:03.436626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d90ad9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.533 [2024-06-11 13:36:03.436657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.533 [2024-06-11 13:36:03.436733] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d900d9 cdw11:d9d9d900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.533 [2024-06-11 13:36:03.436750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.533 [2024-06-11 13:36:03.436827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:6c6c6c6c cdw11:00d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.533 [2024-06-11 13:36:03.436844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.792 #16 NEW cov: 12083 ft: 14220 corp: 10/174b lim: 40 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 CrossOver- 00:08:10.792 [2024-06-11 13:36:03.516641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d8d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.792 [2024-06-11 13:36:03.516673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.792 [2024-06-11 13:36:03.516754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.792 [2024-06-11 13:36:03.516771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.792 #17 NEW cov: 12083 ft: 14289 corp: 11/190b lim: 40 exec/s: 0 rss: 72Mb L: 16/24 MS: 1 ChangeBit- 00:08:10.793 [2024-06-11 13:36:03.566737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:7ed9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.566769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.793 [2024-06-11 13:36:03.566843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d92d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.566860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.793 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:10.793 #18 NEW cov: 12106 ft: 14391 corp: 12/211b lim: 40 exec/s: 0 rss: 72Mb L: 21/24 MS: 1 ShuffleBytes- 00:08:10.793 [2024-06-11 13:36:03.646981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.647012] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.793 [2024-06-11 13:36:03.647087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.647103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.793 #24 NEW cov: 12106 ft: 14410 corp: 13/231b lim: 40 exec/s: 0 rss: 72Mb L: 20/24 MS: 1 ShuffleBytes- 00:08:10.793 [2024-06-11 13:36:03.697557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.697588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:10.793 [2024-06-11 13:36:03.697664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.697680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:10.793 [2024-06-11 13:36:03.697750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d97ed9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.697766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:10.793 [2024-06-11 13:36:03.697841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d92dd9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:10.793 [2024-06-11 13:36:03.697857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.052 #25 NEW cov: 12106 ft: 14735 corp: 14/266b lim: 40 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:11.052 [2024-06-11 13:36:03.757779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.052 [2024-06-11 13:36:03.757810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.052 [2024-06-11 13:36:03.757891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff13ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.052 [2024-06-11 13:36:03.757908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.052 [2024-06-11 13:36:03.757982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d97ed9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.052 [2024-06-11 13:36:03.757998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.052 [2024-06-11 13:36:03.758072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d92dd9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.052 [2024-06-11 
13:36:03.758088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.052 #26 NEW cov: 12106 ft: 14759 corp: 15/301b lim: 40 exec/s: 26 rss: 72Mb L: 35/35 MS: 1 ChangeByte- 00:08:11.052 [2024-06-11 13:36:03.837524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d8d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.052 [2024-06-11 13:36:03.837556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.052 [2024-06-11 13:36:03.837633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d921d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.052 [2024-06-11 13:36:03.837649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.052 #27 NEW cov: 12106 ft: 14778 corp: 16/318b lim: 40 exec/s: 27 rss: 72Mb L: 17/35 MS: 1 InsertByte- 00:08:11.053 [2024-06-11 13:36:03.908158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.053 [2024-06-11 13:36:03.908191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.053 [2024-06-11 13:36:03.908276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.053 [2024-06-11 13:36:03.908293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.053 [2024-06-11 13:36:03.908370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d908d9 cdw11:d9d97ed9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.053 [2024-06-11 13:36:03.908386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.053 [2024-06-11 13:36:03.908461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.053 [2024-06-11 13:36:03.908478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.312 #28 NEW cov: 12106 ft: 14782 corp: 17/355b lim: 40 exec/s: 28 rss: 73Mb L: 37/37 MS: 1 CopyPart- 00:08:11.312 [2024-06-11 13:36:03.988384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d92c2c cdw11:2c2c2c2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:03.988417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:03.988491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2c2c2c2c cdw11:2cd9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:03.988508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:03.988584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:03.988601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:03.988676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:03.988692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.312 #29 NEW cov: 12106 ft: 14802 corp: 18/387b lim: 40 exec/s: 29 rss: 73Mb L: 32/37 MS: 1 InsertRepeatedBytes- 00:08:11.312 [2024-06-11 13:36:04.068399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.068433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:04.068510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.068528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:04.068602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.068619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.312 #30 NEW cov: 12106 ft: 14878 corp: 19/415b lim: 40 exec/s: 30 rss: 73Mb L: 28/37 MS: 1 EraseBytes- 00:08:11.312 [2024-06-11 13:36:04.148613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.148643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:04.148721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.148738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.312 [2024-06-11 13:36:04.148813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d97ed9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.148830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.312 #31 NEW cov: 12106 ft: 14890 corp: 20/445b lim: 40 exec/s: 31 rss: 73Mb L: 30/37 MS: 1 EraseBytes- 00:08:11.312 [2024-06-11 13:36:04.198965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d92c2c cdw11:2c2c2c2c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.312 [2024-06-11 13:36:04.198997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:11.313 [2024-06-11 13:36:04.199074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:2c2c2c2c cdw11:2cd9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.313 [2024-06-11 13:36:04.199090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.313 [2024-06-11 13:36:04.199167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.313 [2024-06-11 13:36:04.199184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.313 [2024-06-11 13:36:04.199265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.313 [2024-06-11 13:36:04.199284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.572 #32 NEW cov: 12106 ft: 14926 corp: 21/477b lim: 40 exec/s: 32 rss: 73Mb L: 32/37 MS: 1 ChangeBit- 00:08:11.572 [2024-06-11 13:36:04.278759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d90003 cdw11:e5d0f2ec SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.278793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.572 [2024-06-11 13:36:04.278867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:db46d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.278883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.572 #33 NEW cov: 12106 ft: 14967 corp: 22/496b lim: 40 exec/s: 33 rss: 73Mb L: 19/37 MS: 1 CMP- DE: "\000\003\345\320\362\354\333F"- 00:08:11.572 [2024-06-11 13:36:04.329318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.329352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.572 [2024-06-11 13:36:04.329429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.329447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.572 [2024-06-11 13:36:04.329524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d908d9 cdw11:d9d97ed9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.329541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.572 [2024-06-11 13:36:04.329619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d932 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.329636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.572 #34 NEW cov: 12106 ft: 14973 corp: 23/533b lim: 40 exec/s: 34 rss: 73Mb L: 37/37 MS: 1 ChangeByte- 00:08:11.572 [2024-06-11 13:36:04.379113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d90100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.379146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.572 [2024-06-11 13:36:04.379221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000232 cdw11:f4bdd9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.379240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.572 #35 NEW cov: 12106 ft: 14977 corp: 24/549b lim: 40 exec/s: 35 rss: 73Mb L: 16/37 MS: 1 CMP- DE: "\001\000\000\000\0022\364\275"- 00:08:11.572 [2024-06-11 13:36:04.429615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.572 [2024-06-11 13:36:04.429648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.573 [2024-06-11 13:36:04.429725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.573 [2024-06-11 13:36:04.429746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.573 [2024-06-11 13:36:04.429817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d908d9 cdw11:d9d97ed9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.573 [2024-06-11 13:36:04.429834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.573 [2024-06-11 13:36:04.429906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d932 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.573 [2024-06-11 13:36:04.429923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.832 #36 NEW cov: 12106 ft: 15000 corp: 25/586b lim: 40 exec/s: 36 rss: 73Mb L: 37/37 MS: 1 ShuffleBytes- 00:08:11.832 [2024-06-11 13:36:04.510113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.510147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.510220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.510237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.510310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 
cdw10:d9d97ed9 cdw11:d9e3e3e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.510327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.510400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e3e3e3e3 cdw11:e3e3e3d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.510416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.510490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:d9d92dd9 cdw11:d9d9083b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.510506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:11.832 #37 NEW cov: 12106 ft: 15089 corp: 26/626b lim: 40 exec/s: 37 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:08:11.832 [2024-06-11 13:36:04.589707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d1d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.589740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.589818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.589835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.832 #38 NEW cov: 12106 ft: 15133 corp: 27/642b lim: 40 exec/s: 38 rss: 73Mb L: 16/40 MS: 1 ChangeBinInt- 00:08:11.832 [2024-06-11 13:36:04.639996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d9d9d9 cdw11:d9d90ad9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.640028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.640108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d900d9 cdw11:d9d9d900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.640129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.640206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:f06c6c6c cdw11:6c00d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.640224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:11.832 #39 NEW cov: 12106 ft: 15165 corp: 28/667b lim: 40 exec/s: 39 rss: 73Mb L: 25/40 MS: 1 InsertByte- 00:08:11.832 [2024-06-11 13:36:04.690169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d9d932d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.690205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:11.832 [2024-06-11 13:36:04.690279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d9d9d9d9 cdw11:d97ed9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.690296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:11.832 [2024-06-11 13:36:04.690369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:d9d9d9d9 cdw11:d9d9d9d9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.832 [2024-06-11 13:36:04.690386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:12.092 #40 NEW cov: 12106 ft: 15179 corp: 29/695b lim: 40 exec/s: 20 rss: 73Mb L: 28/40 MS: 1 ChangeByte- 00:08:12.092 #40 DONE cov: 12106 ft: 15179 corp: 29/695b lim: 40 exec/s: 20 rss: 73Mb 00:08:12.092 ###### Recommended dictionary. ###### 00:08:12.092 "\000\003\345\320\362\354\333F" # Uses: 0 00:08:12.092 "\001\000\000\000\0022\364\275" # Uses: 0 00:08:12.092 ###### End of recommended dictionary. ###### 00:08:12.092 Done 40 runs in 2 second(s) 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4412 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:12.092 13:36:04 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c 
/tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:08:12.092 [2024-06-11 13:36:04.943506] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:12.092 [2024-06-11 13:36:04.943581] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442161 ] 00:08:12.092 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.660 [2024-06-11 13:36:05.275087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.660 [2024-06-11 13:36:05.376620] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.660 [2024-06-11 13:36:05.440501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.660 [2024-06-11 13:36:05.456841] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:08:12.660 INFO: Running with entropic power schedule (0xFF, 100). 00:08:12.660 INFO: Seed: 1033826809 00:08:12.660 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:12.660 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:12.660 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:08:12.660 INFO: A corpus is not provided, starting from an empty corpus 00:08:12.660 #2 INITED exec/s: 0 rss: 64Mb 00:08:12.660 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:12.660 This may also happen if the target rejected all inputs we tried so far 00:08:12.660 [2024-06-11 13:36:05.516118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.660 [2024-06-11 13:36:05.516153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.919 NEW_FUNC[1/687]: 0x4937d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:08:12.919 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:12.919 #17 NEW cov: 11860 ft: 11853 corp: 2/11b lim: 40 exec/s: 0 rss: 71Mb L: 10/10 MS: 5 ShuffleBytes-InsertByte-InsertByte-EraseBytes-CMP- DE: "\007\000\000\000\000\000\000\000"- 00:08:12.919 [2024-06-11 13:36:05.726448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.919 [2024-06-11 13:36:05.726510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:12.919 #18 NEW cov: 11990 ft: 12330 corp: 3/21b lim: 40 exec/s: 0 rss: 72Mb L: 10/10 MS: 1 ChangeByte- 00:08:12.919 [2024-06-11 13:36:05.806808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.919 [2024-06-11 13:36:05.806841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.177 #19 NEW cov: 11996 ft: 12446 corp: 4/36b lim: 40 exec/s: 0 
rss: 72Mb L: 15/15 MS: 1 CopyPart- 00:08:13.177 [2024-06-11 13:36:05.877020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0103e5d1 cdw11:bb63af98 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.177 [2024-06-11 13:36:05.877054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.177 #21 NEW cov: 12081 ft: 12682 corp: 5/46b lim: 40 exec/s: 0 rss: 72Mb L: 10/15 MS: 2 CopyPart-CMP- DE: "\001\003\345\321\273c\257\230"- 00:08:13.177 [2024-06-11 13:36:05.927154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:03ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.177 [2024-06-11 13:36:05.927186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.177 #22 NEW cov: 12081 ft: 13032 corp: 6/56b lim: 40 exec/s: 0 rss: 72Mb L: 10/15 MS: 1 ChangeBinInt- 00:08:13.177 [2024-06-11 13:36:05.977543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00070000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.177 [2024-06-11 13:36:05.977575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.177 [2024-06-11 13:36:05.977645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.177 [2024-06-11 13:36:05.977663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.177 #23 NEW cov: 12081 ft: 13807 corp: 7/74b lim: 40 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 PersAutoDict- DE: "\007\000\000\000\000\000\000\000"- 00:08:13.177 [2024-06-11 13:36:06.037494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4fcffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.177 [2024-06-11 13:36:06.037525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.434 #24 NEW cov: 12081 ft: 13879 corp: 8/89b lim: 40 exec/s: 0 rss: 72Mb L: 15/18 MS: 1 ChangeBinInt- 00:08:13.434 [2024-06-11 13:36:06.107844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.107876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.434 [2024-06-11 13:36:06.107944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00070000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.107961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.434 #25 NEW cov: 12081 ft: 13952 corp: 9/107b lim: 40 exec/s: 0 rss: 72Mb L: 18/18 MS: 1 ShuffleBytes- 00:08:13.434 [2024-06-11 13:36:06.187856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.187888] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.434 #26 NEW cov: 12081 ft: 13970 corp: 10/117b lim: 40 exec/s: 0 rss: 72Mb L: 10/18 MS: 1 EraseBytes- 00:08:13.434 [2024-06-11 13:36:06.238222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4fcffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.238253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.434 [2024-06-11 13:36:06.238326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:00260a07 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.238342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.434 #27 NEW cov: 12081 ft: 14001 corp: 11/140b lim: 40 exec/s: 0 rss: 72Mb L: 23/23 MS: 1 PersAutoDict- DE: "\007\000\000\000\000\000\000\000"- 00:08:13.434 [2024-06-11 13:36:06.318913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.318944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.434 [2024-06-11 13:36:06.319016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.319032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.434 [2024-06-11 13:36:06.319104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.319121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.434 [2024-06-11 13:36:06.319190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:0000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.434 [2024-06-11 13:36:06.319213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.692 #28 NEW cov: 12081 ft: 14366 corp: 12/172b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:13.692 [2024-06-11 13:36:06.398458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.692 [2024-06-11 13:36:06.398490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.692 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:13.692 #29 NEW cov: 12104 ft: 14417 corp: 13/182b lim: 40 exec/s: 0 rss: 72Mb L: 10/32 MS: 1 CrossOver- 00:08:13.692 [2024-06-11 13:36:06.448638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.693 
[2024-06-11 13:36:06.448669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.693 #30 NEW cov: 12104 ft: 14446 corp: 14/197b lim: 40 exec/s: 0 rss: 72Mb L: 15/32 MS: 1 CopyPart- 00:08:13.693 [2024-06-11 13:36:06.499335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.693 [2024-06-11 13:36:06.499368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.693 [2024-06-11 13:36:06.499437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.693 [2024-06-11 13:36:06.499454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.693 [2024-06-11 13:36:06.499520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.693 [2024-06-11 13:36:06.499536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:13.693 [2024-06-11 13:36:06.499606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.693 [2024-06-11 13:36:06.499623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:13.693 #31 NEW cov: 12104 ft: 14459 corp: 15/231b lim: 40 exec/s: 31 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:08:13.693 [2024-06-11 13:36:06.578923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00008000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.693 [2024-06-11 13:36:06.578954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.951 #32 NEW cov: 12104 ft: 14514 corp: 16/241b lim: 40 exec/s: 32 rss: 72Mb L: 10/34 MS: 1 ChangeBit- 00:08:13.951 [2024-06-11 13:36:06.639098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.951 [2024-06-11 13:36:06.639128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.951 #33 NEW cov: 12104 ft: 14555 corp: 17/251b lim: 40 exec/s: 33 rss: 72Mb L: 10/34 MS: 1 ChangeBit- 00:08:13.951 [2024-06-11 13:36:06.689278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00007a00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.951 [2024-06-11 13:36:06.689309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.951 #34 NEW cov: 12104 ft: 14582 corp: 18/262b lim: 40 exec/s: 34 rss: 72Mb L: 11/34 MS: 1 InsertByte- 00:08:13.951 [2024-06-11 13:36:06.759488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000007 cdw11:0000260a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.951 [2024-06-11 
13:36:06.759519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.951 #35 NEW cov: 12104 ft: 14603 corp: 19/277b lim: 40 exec/s: 35 rss: 73Mb L: 15/34 MS: 1 CrossOver- 00:08:13.951 [2024-06-11 13:36:06.830096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000007 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.951 [2024-06-11 13:36:06.830127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:13.951 [2024-06-11 13:36:06.830204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.951 [2024-06-11 13:36:06.830221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:13.951 [2024-06-11 13:36:06.830289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000026 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.951 [2024-06-11 13:36:06.830304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.210 #36 NEW cov: 12104 ft: 14807 corp: 20/302b lim: 40 exec/s: 36 rss: 73Mb L: 25/34 MS: 1 InsertRepeatedBytes- 00:08:14.210 [2024-06-11 13:36:06.890475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:06.890506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.210 [2024-06-11 13:36:06.890576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:06.890593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.210 [2024-06-11 13:36:06.890661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:06.890678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.210 [2024-06-11 13:36:06.890745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffff7070 cdw11:70707070 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:06.890760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.210 #37 NEW cov: 12104 ft: 14902 corp: 21/340b lim: 40 exec/s: 37 rss: 73Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:08:14.210 [2024-06-11 13:36:06.940019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0310ffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:06.940050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.210 #38 NEW cov: 12104 ft: 14912 corp: 22/351b lim: 40 exec/s: 38 rss: 73Mb 
L: 11/38 MS: 1 InsertByte- 00:08:14.210 [2024-06-11 13:36:07.010409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000007 cdw11:00900000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:07.010440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.210 [2024-06-11 13:36:07.010509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000260a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:07.010526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.210 #39 NEW cov: 12104 ft: 14927 corp: 23/367b lim: 40 exec/s: 39 rss: 73Mb L: 16/38 MS: 1 InsertByte- 00:08:14.210 [2024-06-11 13:36:07.060776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00007f6e cdw11:d6ffa369 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:07.060810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.210 [2024-06-11 13:36:07.060885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:07.060901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.210 [2024-06-11 13:36:07.060973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000026 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.210 [2024-06-11 13:36:07.060991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.210 #40 NEW cov: 12104 ft: 14943 corp: 24/392b lim: 40 exec/s: 40 rss: 73Mb L: 25/38 MS: 1 CMP- DE: "\000\000\177n\326\377\243i"- 00:08:14.468 [2024-06-11 13:36:07.140601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0103bb63 cdw11:afe5d1bb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.468 [2024-06-11 13:36:07.140634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.468 #41 NEW cov: 12104 ft: 14977 corp: 25/405b lim: 40 exec/s: 41 rss: 73Mb L: 13/38 MS: 1 CopyPart- 00:08:14.468 [2024-06-11 13:36:07.210794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.468 [2024-06-11 13:36:07.210826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.468 #42 NEW cov: 12104 ft: 15035 corp: 26/420b lim: 40 exec/s: 42 rss: 73Mb L: 15/38 MS: 1 CopyPart- 00:08:14.468 [2024-06-11 13:36:07.260962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000103 cdw11:e5d1bb63 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.468 [2024-06-11 13:36:07.260995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.468 #43 NEW cov: 12104 ft: 15055 corp: 27/430b lim: 40 exec/s: 43 rss: 
73Mb L: 10/38 MS: 1 PersAutoDict- DE: "\001\003\345\321\273c\257\230"- 00:08:14.468 [2024-06-11 13:36:07.311090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0103bb63 cdw11:afe5d13b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.468 [2024-06-11 13:36:07.311122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.468 #44 NEW cov: 12104 ft: 15078 corp: 28/443b lim: 40 exec/s: 44 rss: 73Mb L: 13/38 MS: 1 ChangeBit- 00:08:14.727 [2024-06-11 13:36:07.381931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:f4000000 cdw11:00ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.727 [2024-06-11 13:36:07.381968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.727 [2024-06-11 13:36:07.382039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ff00ff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.727 [2024-06-11 13:36:07.382056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:14.727 [2024-06-11 13:36:07.382125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.727 [2024-06-11 13:36:07.382141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:14.727 [2024-06-11 13:36:07.382217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.727 [2024-06-11 13:36:07.382233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:14.727 #45 NEW cov: 12104 ft: 15091 corp: 29/477b lim: 40 exec/s: 45 rss: 73Mb L: 34/38 MS: 1 CopyPart- 00:08:14.727 [2024-06-11 13:36:07.461547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:07000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:14.727 [2024-06-11 13:36:07.461578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:14.727 #46 NEW cov: 12104 ft: 15153 corp: 30/487b lim: 40 exec/s: 23 rss: 73Mb L: 10/38 MS: 1 ChangeBit- 00:08:14.727 #46 DONE cov: 12104 ft: 15153 corp: 30/487b lim: 40 exec/s: 23 rss: 73Mb 00:08:14.727 ###### Recommended dictionary. ###### 00:08:14.727 "\007\000\000\000\000\000\000\000" # Uses: 2 00:08:14.727 "\001\003\345\321\273c\257\230" # Uses: 1 00:08:14.727 "\000\000\177n\326\377\243i" # Uses: 0 00:08:14.727 ###### End of recommended dictionary. 
###### 00:08:14.727 Done 46 runs in 2 second(s) 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4413 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:14.986 13:36:07 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:08:14.986 [2024-06-11 13:36:07.692163] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:14.986 [2024-06-11 13:36:07.692260] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3442599 ] 00:08:14.986 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.245 [2024-06-11 13:36:08.013154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.245 [2024-06-11 13:36:08.116792] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.504 [2024-06-11 13:36:08.180784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.504 [2024-06-11 13:36:08.197146] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:08:15.504 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:15.504 INFO: Seed: 3775837967 00:08:15.504 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:15.504 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:15.504 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:08:15.504 INFO: A corpus is not provided, starting from an empty corpus 00:08:15.504 #2 INITED exec/s: 0 rss: 64Mb 00:08:15.504 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:15.504 This may also happen if the target rejected all inputs we tried so far 00:08:15.504 [2024-06-11 13:36:08.265669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.504 [2024-06-11 13:36:08.265716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.504 [2024-06-11 13:36:08.265861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.504 [2024-06-11 13:36:08.265879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.504 [2024-06-11 13:36:08.266026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.504 [2024-06-11 13:36:08.266045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.504 [2024-06-11 13:36:08.266187] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.504 [2024-06-11 13:36:08.266206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.762 NEW_FUNC[1/686]: 0x495390 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:08:15.762 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:15.762 #5 NEW cov: 11848 ft: 11848 corp: 2/33b lim: 40 exec/s: 0 rss: 71Mb L: 32/32 MS: 3 InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:08:15.762 [2024-06-11 13:36:08.476269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.476306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.476404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.476419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.476508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.476523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.476609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.476623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.763 #6 NEW cov: 11978 ft: 12371 corp: 3/65b lim: 40 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:15.763 [2024-06-11 13:36:08.546823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.546852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.546944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.546958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.547049] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.547064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.547159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.547173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.763 #7 NEW cov: 11984 ft: 12665 corp: 4/102b lim: 40 exec/s: 0 rss: 72Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:08:15.763 [2024-06-11 13:36:08.596922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.596950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.597041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.597056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.597145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.597159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.597240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.597254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:15.763 #8 NEW cov: 12069 ft: 12950 corp: 5/134b lim: 40 exec/s: 0 rss: 72Mb L: 32/37 MS: 1 ChangeBit- 00:08:15.763 [2024-06-11 13:36:08.667345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.667371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.667476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.667491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.667588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.667603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:15.763 [2024-06-11 13:36:08.667699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:15.763 [2024-06-11 13:36:08.667714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.022 #9 NEW cov: 12069 ft: 13127 corp: 6/172b lim: 40 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 InsertByte- 00:08:16.022 [2024-06-11 13:36:08.727585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.727610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.727702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.727716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.727810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:f7ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.727824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.727914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.727927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.022 #10 NEW cov: 12069 ft: 13239 corp: 7/204b 
lim: 40 exec/s: 0 rss: 72Mb L: 32/38 MS: 1 ChangeBit- 00:08:16.022 [2024-06-11 13:36:08.777933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.777957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.778048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.778061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.778155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.778170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.778266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.778281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.022 #11 NEW cov: 12069 ft: 13298 corp: 8/238b lim: 40 exec/s: 0 rss: 72Mb L: 34/38 MS: 1 CMP- DE: "\000\000"- 00:08:16.022 [2024-06-11 13:36:08.848276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.848302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.848403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.848418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.848514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.848529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.848626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.848642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.022 #12 NEW cov: 12069 ft: 13394 corp: 9/277b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CMP- DE: "\004\000"- 00:08:16.022 [2024-06-11 13:36:08.898746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 
13:36:08.898772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.898863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.898879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.898966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.898981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.022 [2024-06-11 13:36:08.899074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.022 [2024-06-11 13:36:08.899089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.022 #13 NEW cov: 12069 ft: 13455 corp: 10/309b lim: 40 exec/s: 0 rss: 72Mb L: 32/39 MS: 1 CrossOver- 00:08:16.281 [2024-06-11 13:36:08.949126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:08.949151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:08.949248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:08.949261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:08.949347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:08.949365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:08.949462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:fffbff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:08.949476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.281 #14 NEW cov: 12069 ft: 13480 corp: 11/348b lim: 40 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 ChangeBinInt- 00:08:16.281 [2024-06-11 13:36:09.018983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.019009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:09.019099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.019114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:09.019204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.019218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.281 #15 NEW cov: 12069 ft: 14042 corp: 12/373b lim: 40 exec/s: 0 rss: 72Mb L: 25/39 MS: 1 EraseBytes- 00:08:16.281 [2024-06-11 13:36:09.089866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffff18 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.089894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:09.089989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.090004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:09.090106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffbfffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.090121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:09.090218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.090233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.281 #16 NEW cov: 12069 ft: 14069 corp: 13/406b lim: 40 exec/s: 0 rss: 72Mb L: 33/39 MS: 1 InsertByte- 00:08:16.281 [2024-06-11 13:36:09.140280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.140307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.281 [2024-06-11 13:36:09.140404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.281 [2024-06-11 13:36:09.140418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.282 [2024-06-11 13:36:09.140515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:f7ffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.282 [2024-06-11 13:36:09.140532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.282 [2024-06-11 13:36:09.140633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.282 [2024-06-11 13:36:09.140649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.282 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:16.282 #17 NEW cov: 12092 ft: 14097 corp: 14/442b lim: 40 exec/s: 0 rss: 72Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:08:16.540 [2024-06-11 13:36:09.210480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.540 [2024-06-11 13:36:09.210507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.540 [2024-06-11 13:36:09.210593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.540 [2024-06-11 13:36:09.210609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.540 [2024-06-11 13:36:09.210703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.540 [2024-06-11 13:36:09.210718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.540 [2024-06-11 13:36:09.210813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff31ff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.210826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.541 #18 NEW cov: 12092 ft: 14120 corp: 15/479b lim: 40 exec/s: 0 rss: 72Mb L: 37/39 MS: 1 ChangeByte- 00:08:16.541 [2024-06-11 13:36:09.260510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.260537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.260634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.260650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.260741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffbfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.260756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.541 #19 NEW cov: 12092 ft: 14174 corp: 16/510b lim: 40 exec/s: 19 rss: 73Mb L: 31/39 MS: 1 InsertRepeatedBytes- 00:08:16.541 [2024-06-11 13:36:09.331196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.331224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.331324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.331337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.331425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffff0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.331442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.331534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0026ff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.331548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.541 #20 NEW cov: 12092 ft: 14234 corp: 17/548b lim: 40 exec/s: 20 rss: 73Mb L: 38/39 MS: 1 ChangeBinInt- 00:08:16.541 [2024-06-11 13:36:09.401652] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.401678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.401770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.401786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.401879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bfffffff cdw11:dfffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.401893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.401990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.402004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.541 #21 NEW cov: 12092 ft: 14257 corp: 18/580b lim: 40 exec/s: 21 rss: 73Mb L: 32/39 MS: 1 ChangeBit- 00:08:16.541 [2024-06-11 13:36:09.452096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.452120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.452212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.452246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.541 [2024-06-11 13:36:09.452337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.541 [2024-06-11 13:36:09.452352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.452445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.452461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.800 #22 NEW cov: 12092 ft: 14270 corp: 19/618b lim: 40 exec/s: 22 rss: 73Mb L: 38/39 MS: 1 CopyPart- 00:08:16.800 [2024-06-11 13:36:09.502443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffff2a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.502472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.502568] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.502583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.502684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.502699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.502799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.502813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.800 #23 NEW cov: 12092 ft: 14284 corp: 20/651b lim: 40 exec/s: 23 rss: 73Mb L: 33/39 MS: 1 InsertByte- 00:08:16.800 [2024-06-11 13:36:09.552813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffff18 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.552836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.552928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.552942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 
13:36:09.553036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffbfffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.553050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.553142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.553155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.800 #24 NEW cov: 12092 ft: 14308 corp: 21/684b lim: 40 exec/s: 24 rss: 73Mb L: 33/39 MS: 1 ShuffleBytes- 00:08:16.800 [2024-06-11 13:36:09.623063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.623088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.623175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.623191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.623290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:fffffff7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.623304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.623392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.623410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:16.800 #30 NEW cov: 12092 ft: 14313 corp: 22/722b lim: 40 exec/s: 30 rss: 73Mb L: 38/39 MS: 1 ChangeBit- 00:08:16.800 [2024-06-11 13:36:09.673026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:fff7ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.673050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.673135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.673148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:16.800 [2024-06-11 13:36:09.673248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffbfff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:16.800 [2024-06-11 13:36:09.673264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:16.800 #31 NEW cov: 12092 ft: 14349 corp: 23/753b lim: 40 exec/s: 31 rss: 73Mb L: 31/39 MS: 1 ChangeBinInt- 00:08:17.059 [2024-06-11 13:36:09.733646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24fffdff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.733672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.733768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.733783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.733881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.733894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.733985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.733998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.059 #32 NEW cov: 12092 ft: 14352 corp: 24/785b lim: 40 exec/s: 32 rss: 73Mb L: 32/39 MS: 1 ChangeBit- 00:08:17.059 [2024-06-11 13:36:09.783888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.783920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.784010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.784025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.784123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.784138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.784231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.784248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.059 #33 NEW cov: 12092 ft: 14357 corp: 25/820b lim: 40 exec/s: 33 rss: 73Mb L: 35/39 MS: 1 CopyPart- 00:08:17.059 [2024-06-11 13:36:09.834460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff 
cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.834486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.834583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.834598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.834694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.834710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.059 [2024-06-11 13:36:09.834807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.059 [2024-06-11 13:36:09.834822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.059 #34 NEW cov: 12092 ft: 14369 corp: 26/857b lim: 40 exec/s: 34 rss: 73Mb L: 37/39 MS: 1 PersAutoDict- DE: "\000\000"- 00:08:17.060 [2024-06-11 13:36:09.904326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.060 [2024-06-11 13:36:09.904353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.060 [2024-06-11 13:36:09.904454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ff000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.060 [2024-06-11 13:36:09.904471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.060 #35 NEW cov: 12092 ft: 14589 corp: 27/877b lim: 40 exec/s: 35 rss: 73Mb L: 20/39 MS: 1 EraseBytes- 00:08:17.060 [2024-06-11 13:36:09.955332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.060 [2024-06-11 13:36:09.955358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.060 [2024-06-11 13:36:09.955456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.060 [2024-06-11 13:36:09.955471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.060 [2024-06-11 13:36:09.955578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.060 [2024-06-11 13:36:09.955594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.060 [2024-06-11 13:36:09.955689] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:20ffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.060 [2024-06-11 13:36:09.955704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.005735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.005764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.005863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.005880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.005978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.005994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.006081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:20ffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.006097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.319 #37 NEW cov: 12092 ft: 14608 corp: 28/909b lim: 40 exec/s: 37 rss: 73Mb L: 32/39 MS: 2 ChangeBinInt-ChangeByte- 00:08:17.319 [2024-06-11 13:36:10.056176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffff7f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.056211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.056303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.056317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.056415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.056430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.056532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ff31ff00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.056547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.319 #38 NEW cov: 12092 ft: 14642 corp: 29/946b lim: 40 exec/s: 38 rss: 74Mb L: 
37/39 MS: 1 ChangeBit- 00:08:17.319 [2024-06-11 13:36:10.126412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.126442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.126536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.126552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.126641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.126656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.126751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff04 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.126769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.319 #39 NEW cov: 12092 ft: 14704 corp: 30/978b lim: 40 exec/s: 39 rss: 74Mb L: 32/39 MS: 1 ShuffleBytes- 00:08:17.319 [2024-06-11 13:36:10.176763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.176790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.176878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffaeffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.176893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.176987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.177002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.319 [2024-06-11 13:36:10.177089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffff00 cdw11:20ffffae SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.319 [2024-06-11 13:36:10.177105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.319 #40 NEW cov: 12092 ft: 14712 corp: 31/1013b lim: 40 exec/s: 40 rss: 74Mb L: 35/39 MS: 1 CopyPart- 00:08:17.578 [2024-06-11 13:36:10.247264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:24ffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.578 [2024-06-11 13:36:10.247290] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:17.578 [2024-06-11 13:36:10.247385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.578 [2024-06-11 13:36:10.247398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:17.578 [2024-06-11 13:36:10.247492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:bfffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.578 [2024-06-11 13:36:10.247508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:17.578 [2024-06-11 13:36:10.247598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:17.578 [2024-06-11 13:36:10.247613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:17.578 #41 NEW cov: 12092 ft: 14723 corp: 32/1047b lim: 40 exec/s: 20 rss: 74Mb L: 34/39 MS: 1 ShuffleBytes- 00:08:17.578 #41 DONE cov: 12092 ft: 14723 corp: 32/1047b lim: 40 exec/s: 20 rss: 74Mb 00:08:17.578 ###### Recommended dictionary. ###### 00:08:17.578 "\000\000" # Uses: 1 00:08:17.578 "\004\000" # Uses: 0 00:08:17.578 ###### End of recommended dictionary. ###### 00:08:17.578 Done 41 runs in 2 second(s) 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:17.578 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4414 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:17.579 13:36:10 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:08:17.579 [2024-06-11 13:36:10.451597] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:17.579 [2024-06-11 13:36:10.451643] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443428 ] 00:08:17.579 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.837 [2024-06-11 13:36:10.623691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.837 [2024-06-11 13:36:10.708125] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.095 [2024-06-11 13:36:10.772013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.095 [2024-06-11 13:36:10.788398] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:08:18.095 INFO: Running with entropic power schedule (0xFF, 100). 00:08:18.095 INFO: Seed: 2069862336 00:08:18.095 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:18.095 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:18.095 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:08:18.095 INFO: A corpus is not provided, starting from an empty corpus 00:08:18.095 #2 INITED exec/s: 0 rss: 65Mb 00:08:18.095 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:18.095 This may also happen if the target rejected all inputs we tried so far 00:08:18.095 [2024-06-11 13:36:10.838124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.095 [2024-06-11 13:36:10.838160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.095 [2024-06-11 13:36:10.838239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.095 [2024-06-11 13:36:10.838257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.095 [2024-06-11 13:36:10.838329] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.095 [2024-06-11 13:36:10.838347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.095 [2024-06-11 13:36:10.838425] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.095 [2024-06-11 13:36:10.838441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.095 [2024-06-11 13:36:10.838514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.095 [2024-06-11 13:36:10.838530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.354 NEW_FUNC[1/687]: 0x496f50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:08:18.354 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:18.354 #52 NEW cov: 11842 ft: 11842 corp: 2/36b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 5 ChangeBit-ChangeBit-ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:18.354 [2024-06-11 13:36:11.048102] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.048147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.354 [2024-06-11 13:36:11.048227] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.048248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.354 #55 NEW cov: 11979 ft: 12929 corp: 3/56b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:08:18.354 [2024-06-11 13:36:11.108923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.108958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.354 [2024-06-11 13:36:11.109035] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.109052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.354 [2024-06-11 13:36:11.109128] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.109150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.354 [2024-06-11 13:36:11.109224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.109241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.354 [2024-06-11 13:36:11.109318] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.109334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.354 #61 NEW cov: 11985 ft: 13159 corp: 4/91b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:18.354 [2024-06-11 13:36:11.188428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.188467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.354 [2024-06-11 13:36:11.188544] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.354 [2024-06-11 13:36:11.188570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.354 #62 NEW cov: 12070 ft: 13398 corp: 5/111b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 1 CopyPart- 00:08:18.612 [2024-06-11 13:36:11.268640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.268675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.268755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.268775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.613 #63 NEW cov: 12070 ft: 13512 corp: 6/131b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 1 ShuffleBytes- 00:08:18.613 [2024-06-11 13:36:11.349525] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.349559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.349634] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:5 cdw10:80000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.349654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.349730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.349762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.349838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.349855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.349930] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.349947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.613 #64 NEW cov: 12070 ft: 13573 corp: 7/166b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CMP- DE: "\202\371\0168\335\177\000\000"- 00:08:18.613 [2024-06-11 13:36:11.429839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.429873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.429951] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.429971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.430049] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.430068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.430144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.430161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.430246] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.430263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:18.613 #65 NEW cov: 12070 ft: 13656 corp: 8/201b lim: 35 exec/s: 0 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:08:18.613 [2024-06-11 13:36:11.509560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.509593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.509672] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES TIMESTAMP cid:5 cdw10:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.509689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.613 [2024-06-11 13:36:11.509767] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.613 [2024-06-11 13:36:11.509787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.871 #66 NEW cov: 12070 ft: 13856 corp: 9/222b lim: 35 exec/s: 0 rss: 72Mb L: 21/35 MS: 1 CrossOver- 00:08:18.871 [2024-06-11 13:36:11.559647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.559682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.559763] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.559783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.559863] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.559882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.871 #67 NEW cov: 12070 ft: 13908 corp: 10/243b lim: 35 exec/s: 0 rss: 72Mb L: 21/35 MS: 1 InsertByte- 00:08:18.871 [2024-06-11 13:36:11.609602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.609636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.609713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.609732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.871 #68 NEW cov: 12070 ft: 13964 corp: 11/263b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 1 ChangeBinInt- 00:08:18.871 [2024-06-11 13:36:11.659487] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.659521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.871 #69 NEW cov: 12070 ft: 14675 corp: 12/275b lim: 35 exec/s: 0 rss: 72Mb L: 12/35 MS: 1 EraseBytes- 00:08:18.871 [2024-06-11 13:36:11.720369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 
13:36:11.720405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.720485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.720506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.720579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.720599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.720674] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.720693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:18.871 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:18.871 #70 NEW cov: 12093 ft: 14775 corp: 13/306b lim: 35 exec/s: 0 rss: 72Mb L: 31/35 MS: 1 CopyPart- 00:08:18.871 [2024-06-11 13:36:11.770034] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.770068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:18.871 [2024-06-11 13:36:11.770144] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:18.871 [2024-06-11 13:36:11.770163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.129 #71 NEW cov: 12093 ft: 14848 corp: 14/326b lim: 35 exec/s: 0 rss: 72Mb L: 20/35 MS: 1 ChangeBit- 00:08:19.129 [2024-06-11 13:36:11.820878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.820910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.820984] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.821002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.821077] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.821096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.821170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.821186] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.821266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.821283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.129 #72 NEW cov: 12093 ft: 14865 corp: 15/361b lim: 35 exec/s: 72 rss: 72Mb L: 35/35 MS: 1 CopyPart- 00:08:19.129 [2024-06-11 13:36:11.870080] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.870114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.129 #73 NEW cov: 12093 ft: 14886 corp: 16/374b lim: 35 exec/s: 73 rss: 72Mb L: 13/35 MS: 1 InsertByte- 00:08:19.129 [2024-06-11 13:36:11.951006] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.951041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.951120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.951141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.951219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.951238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:11.951313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000007f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:11.951329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.129 #74 NEW cov: 12093 ft: 14921 corp: 17/402b lim: 35 exec/s: 74 rss: 72Mb L: 28/35 MS: 1 PersAutoDict- DE: "\202\371\0168\335\177\000\000"- 00:08:19.129 [2024-06-11 13:36:12.031023] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:12.031058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:12.031139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:12.031159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.129 [2024-06-11 13:36:12.031242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.129 [2024-06-11 13:36:12.031262] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.386 #80 NEW cov: 12093 ft: 15004 corp: 18/426b lim: 35 exec/s: 80 rss: 72Mb L: 24/35 MS: 1 CopyPart- 00:08:19.386 [2024-06-11 13:36:12.111183] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.111223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.386 [2024-06-11 13:36:12.111300] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.111322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.386 [2024-06-11 13:36:12.111396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.111416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.386 #81 NEW cov: 12093 ft: 15036 corp: 19/447b lim: 35 exec/s: 81 rss: 73Mb L: 21/35 MS: 1 InsertByte- 00:08:19.386 NEW_FUNC[1/2]: 0x4b8410 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:08:19.386 NEW_FUNC[2/2]: 0x11e9190 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1763 00:08:19.386 #84 NEW cov: 12126 ft: 15088 corp: 20/455b lim: 35 exec/s: 84 rss: 73Mb L: 8/35 MS: 3 CopyPart-CopyPart-InsertRepeatedBytes- 00:08:19.386 [2024-06-11 13:36:12.221770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.221805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.386 [2024-06-11 13:36:12.221883] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.221903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.386 [2024-06-11 13:36:12.221978] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.221998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.386 [2024-06-11 13:36:12.222076] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.386 [2024-06-11 13:36:12.222092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.386 #85 NEW cov: 12126 ft: 15151 corp: 21/484b lim: 35 exec/s: 85 rss: 73Mb L: 29/35 MS: 1 EraseBytes- 00:08:19.644 [2024-06-11 13:36:12.301320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST RESERVE MASK cid:4 cdw10:80000082 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:19.644 [2024-06-11 13:36:12.301355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.644 NEW_FUNC[1/1]: 0x4bd290 in feat_rsv_notification_mask /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:378 00:08:19.644 #86 NEW cov: 12154 ft: 15194 corp: 22/492b lim: 35 exec/s: 86 rss: 73Mb L: 8/35 MS: 1 PersAutoDict- DE: "\202\371\0168\335\177\000\000"- 00:08:19.644 [2024-06-11 13:36:12.382240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.382275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.644 [2024-06-11 13:36:12.382351] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.382371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.644 [2024-06-11 13:36:12.382447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.382467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.644 [2024-06-11 13:36:12.382543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:0000007f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.382559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.644 #87 NEW cov: 12154 ft: 15198 corp: 23/520b lim: 35 exec/s: 87 rss: 73Mb L: 28/35 MS: 1 ChangeASCIIInt- 00:08:19.644 [2024-06-11 13:36:12.461983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.462018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.644 [2024-06-11 13:36:12.462094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.462114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.644 #88 NEW cov: 12154 ft: 15218 corp: 24/540b lim: 35 exec/s: 88 rss: 73Mb L: 20/35 MS: 1 ChangeByte- 00:08:19.644 [2024-06-11 13:36:12.512614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.512650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.644 [2024-06-11 13:36:12.512731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.512751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:08:19.644 [2024-06-11 13:36:12.512829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.512849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.644 [2024-06-11 13:36:12.512925] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.644 [2024-06-11 13:36:12.512944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.901 #89 NEW cov: 12154 ft: 15234 corp: 25/574b lim: 35 exec/s: 89 rss: 73Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:08:19.901 [2024-06-11 13:36:12.593062] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.593095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.593173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.593193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.593280] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.593300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.593377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.593394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.593473] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.593489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.901 #90 NEW cov: 12154 ft: 15280 corp: 26/609b lim: 35 exec/s: 90 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:08:19.901 [2024-06-11 13:36:12.643187] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.643225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.643308] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.643325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.643403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:19.901 [2024-06-11 13:36:12.643423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.643506] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.643523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.643600] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.643617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:19.901 #91 NEW cov: 12154 ft: 15294 corp: 27/644b lim: 35 exec/s: 91 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:08:19.901 [2024-06-11 13:36:12.723206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.723241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.723321] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.723342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.723428] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.723448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.723527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.723543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:19.901 #92 NEW cov: 12154 ft: 15299 corp: 28/678b lim: 35 exec/s: 92 rss: 74Mb L: 34/35 MS: 1 InsertRepeatedBytes- 00:08:19.901 [2024-06-11 13:36:12.803230] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.803267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.803350] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.803369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:19.901 [2024-06-11 13:36:12.803447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ea SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:19.901 [2024-06-11 13:36:12.803466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.160 #93 NEW cov: 12154 ft: 15311 corp: 29/702b lim: 35 exec/s: 46 rss: 74Mb L: 24/35 MS: 1 ShuffleBytes- 00:08:20.160 #93 DONE cov: 12154 ft: 15311 corp: 29/702b lim: 35 exec/s: 46 rss: 74Mb 00:08:20.160 ###### Recommended dictionary. ###### 00:08:20.160 "\202\371\0168\335\177\000\000" # Uses: 2 00:08:20.160 ###### End of recommended dictionary. ###### 00:08:20.160 Done 93 runs in 2 second(s) 00:08:20.160 13:36:12 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4415 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:20.160 13:36:13 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:08:20.160 [2024-06-11 13:36:13.042768] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:20.160 [2024-06-11 13:36:13.042845] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3443862 ] 00:08:20.419 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.419 [2024-06-11 13:36:13.255290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.678 [2024-06-11 13:36:13.338929] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.678 [2024-06-11 13:36:13.402838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.678 [2024-06-11 13:36:13.419216] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:08:20.678 INFO: Running with entropic power schedule (0xFF, 100). 00:08:20.678 INFO: Seed: 406899462 00:08:20.678 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:20.678 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:20.678 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:08:20.679 INFO: A corpus is not provided, starting from an empty corpus 00:08:20.679 #2 INITED exec/s: 0 rss: 65Mb 00:08:20.679 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:20.679 This may also happen if the target rejected all inputs we tried so far 00:08:20.679 [2024-06-11 13:36:13.475477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.679 [2024-06-11 13:36:13.475515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.679 [2024-06-11 13:36:13.475599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.679 [2024-06-11 13:36:13.475617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.679 [2024-06-11 13:36:13.475694] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.679 [2024-06-11 13:36:13.475711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.679 [2024-06-11 13:36:13.475792] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.679 [2024-06-11 13:36:13.475809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.938 NEW_FUNC[1/686]: 0x498490 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:08:20.938 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:20.938 #16 NEW cov: 11823 ft: 11823 corp: 2/33b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 4 ChangeBit-CrossOver-ChangeBit-InsertRepeatedBytes- 00:08:20.938 [2024-06-11 13:36:13.685895] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:20.938 [2024-06-11 13:36:13.685938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.686018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.686035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.686111] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.686128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.686205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.686223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.938 #17 NEW cov: 11960 ft: 12555 corp: 3/65b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:08:20.938 [2024-06-11 13:36:13.766053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000042f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.766086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.766162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.766181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.766265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.766283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.766361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.766377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:20.938 #18 NEW cov: 11966 ft: 12722 corp: 4/97b lim: 35 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeByte- 00:08:20.938 [2024-06-11 13:36:13.816206] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.816239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.816320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.816338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.816425] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.816444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:20.938 [2024-06-11 13:36:13.816522] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:20.938 [2024-06-11 13:36:13.816539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.197 #19 NEW cov: 12051 ft: 12930 corp: 5/131b lim: 35 exec/s: 0 rss: 72Mb L: 34/34 MS: 1 CrossOver- 00:08:21.197 [2024-06-11 13:36:13.866406] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.866443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:13.866530] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.866550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:13.866630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.866647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:13.866723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.866741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.197 #20 NEW cov: 12051 ft: 12994 corp: 6/164b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 InsertByte- 00:08:21.197 [2024-06-11 13:36:13.936582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.936616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:13.936692] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.936710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:13.936786] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.936811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:13.936886] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:13.936903] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.197 #21 NEW cov: 12051 ft: 13030 corp: 7/197b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 CopyPart- 00:08:21.197 [2024-06-11 13:36:14.006740] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:14.006773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:14.006848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:14.006866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:14.006948] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:14.006965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.197 [2024-06-11 13:36:14.007040] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.197 [2024-06-11 13:36:14.007057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.197 #22 NEW cov: 12051 ft: 13079 corp: 8/229b lim: 35 exec/s: 0 rss: 72Mb L: 32/34 MS: 1 ChangeByte- 00:08:21.197 [2024-06-11 13:36:14.056906] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.198 [2024-06-11 13:36:14.056939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.198 [2024-06-11 13:36:14.057015] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.198 [2024-06-11 13:36:14.057033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.198 [2024-06-11 13:36:14.057109] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.198 [2024-06-11 13:36:14.057126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.198 [2024-06-11 13:36:14.057205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.198 [2024-06-11 13:36:14.057224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.198 #23 NEW cov: 12051 ft: 13210 corp: 9/261b lim: 35 exec/s: 0 rss: 72Mb L: 32/34 MS: 1 ShuffleBytes- 00:08:21.198 [2024-06-11 13:36:14.106706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.198 [2024-06-11 13:36:14.106740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:08:21.198 [2024-06-11 13:36:14.106830] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.198 [2024-06-11 13:36:14.106855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.456 #24 NEW cov: 12051 ft: 13806 corp: 10/281b lim: 35 exec/s: 0 rss: 72Mb L: 20/34 MS: 1 EraseBytes- 00:08:21.456 [2024-06-11 13:36:14.167195] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.167235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.167327] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.167352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.167445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.167468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.167558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.167582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.456 #25 NEW cov: 12051 ft: 13878 corp: 11/314b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ChangeBit- 00:08:21.456 [2024-06-11 13:36:14.237448] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.237481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.237573] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.237598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.237691] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.237715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.237808] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.237828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.456 #26 NEW cov: 12051 ft: 13895 corp: 12/347b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:21.456 [2024-06-11 13:36:14.287592] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.287625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.287715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.287740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.287829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.287852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.287942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.287962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.456 #29 NEW cov: 12051 ft: 13926 corp: 13/379b lim: 35 exec/s: 0 rss: 72Mb L: 32/34 MS: 3 InsertByte-ShuffleBytes-CrossOver- 00:08:21.456 [2024-06-11 13:36:14.337768] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.337800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.337892] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.337917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.338007] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.338029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.456 [2024-06-11 13:36:14.338119] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.456 [2024-06-11 13:36:14.338147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.715 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:21.715 #30 NEW cov: 12074 ft: 13995 corp: 14/412b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 ShuffleBytes- 00:08:21.715 [2024-06-11 13:36:14.387920] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000042f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.715 [2024-06-11 13:36:14.387954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.715 [2024-06-11 13:36:14.388044] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.715 [2024-06-11 13:36:14.388070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.715 [2024-06-11 13:36:14.388160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.715 [2024-06-11 13:36:14.388184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.715 [2024-06-11 13:36:14.388282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.715 [2024-06-11 13:36:14.388303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.715 #31 NEW cov: 12074 ft: 14030 corp: 15/445b lim: 35 exec/s: 0 rss: 72Mb L: 33/34 MS: 1 InsertByte- 00:08:21.715 [2024-06-11 13:36:14.458113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.715 [2024-06-11 13:36:14.458145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.715 [2024-06-11 13:36:14.458240] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.715 [2024-06-11 13:36:14.458267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.458359] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.458383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.458476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.458497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.716 #32 NEW cov: 12074 ft: 14038 corp: 16/477b lim: 35 exec/s: 32 rss: 72Mb L: 32/34 MS: 1 ChangeBinInt- 00:08:21.716 [2024-06-11 13:36:14.508247] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.508281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.508372] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.508397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.508490] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.508514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:21.716 [2024-06-11 13:36:14.508611] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.508632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.716 #33 NEW cov: 12074 ft: 14072 corp: 17/511b lim: 35 exec/s: 33 rss: 72Mb L: 34/34 MS: 1 ChangeByte- 00:08:21.716 [2024-06-11 13:36:14.578404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.578437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.578529] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.578554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.578647] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.578669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.716 [2024-06-11 13:36:14.578760] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.716 [2024-06-11 13:36:14.578780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.716 #34 NEW cov: 12074 ft: 14078 corp: 18/543b lim: 35 exec/s: 34 rss: 72Mb L: 32/34 MS: 1 ChangeBinInt- 00:08:21.974 [2024-06-11 13:36:14.628589] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.628622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.628713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.628738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.628831] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.628854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.628943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.628963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.975 #35 NEW cov: 12074 ft: 14100 corp: 19/577b lim: 35 exec/s: 35 rss: 72Mb L: 34/34 MS: 1 CopyPart- 00:08:21.975 [2024-06-11 13:36:14.678722] nvme_qpair.c: 215:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.678754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.678845] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.678870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.678959] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.678986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.679079] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.679099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.975 #36 NEW cov: 12074 ft: 14121 corp: 20/610b lim: 35 exec/s: 36 rss: 72Mb L: 33/34 MS: 1 ChangeBit- 00:08:21.975 [2024-06-11 13:36:14.749132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.749164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.749260] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.749285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.749374] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.749398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.749486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.749506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.749599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.749619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:21.975 #37 NEW cov: 12074 ft: 14177 corp: 21/645b lim: 35 exec/s: 37 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:08:21.975 [2024-06-11 13:36:14.819142] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.819175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.819269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.819295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.819387] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.819411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:21.975 [2024-06-11 13:36:14.819501] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:21.975 [2024-06-11 13:36:14.819522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:21.975 #38 NEW cov: 12074 ft: 14190 corp: 22/677b lim: 35 exec/s: 38 rss: 73Mb L: 32/35 MS: 1 CopyPart- 00:08:22.234 [2024-06-11 13:36:14.889371] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000011d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:14.889404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:14.889496] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:14.889527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:14.889618] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:14.889639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:14.889731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:14.889752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.234 #39 NEW cov: 12074 ft: 14203 corp: 23/709b lim: 35 exec/s: 39 rss: 73Mb L: 32/35 MS: 1 ChangeByte- 00:08:22.234 [2024-06-11 13:36:14.959050] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:14.959082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.234 #41 NEW cov: 12074 ft: 14554 corp: 24/716b lim: 35 exec/s: 41 rss: 73Mb L: 7/35 MS: 2 CrossOver-CopyPart- 00:08:22.234 [2024-06-11 13:36:15.039761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.039793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 
13:36:15.039885] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.039910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:15.040000] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.040022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:15.040113] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.040133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.234 #42 NEW cov: 12074 ft: 14568 corp: 25/749b lim: 35 exec/s: 42 rss: 73Mb L: 33/35 MS: 1 InsertByte- 00:08:22.234 [2024-06-11 13:36:15.089911] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.089943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:15.090033] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.090057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:15.090149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.090173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.234 [2024-06-11 13:36:15.090269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.234 [2024-06-11 13:36:15.090294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.493 #43 NEW cov: 12074 ft: 14605 corp: 26/781b lim: 35 exec/s: 43 rss: 73Mb L: 32/35 MS: 1 ChangeBit- 00:08:22.493 [2024-06-11 13:36:15.170159] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.170191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.170290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.170315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.170409] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.170433] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.170524] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.170544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.493 #44 NEW cov: 12074 ft: 14623 corp: 27/814b lim: 35 exec/s: 44 rss: 73Mb L: 33/35 MS: 1 InsertByte- 00:08:22.493 [2024-06-11 13:36:15.220322] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.220357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.220452] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000068b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.220477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.220567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.220592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.220680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.220700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.493 #45 NEW cov: 12074 ft: 14628 corp: 28/847b lim: 35 exec/s: 45 rss: 73Mb L: 33/35 MS: 1 ChangeByte- 00:08:22.493 [2024-06-11 13:36:15.290518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000041d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.290552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.290640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000427 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.290665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.290756] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.290779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.290871] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.290891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.493 #46 NEW cov: 12074 ft: 14635 corp: 29/880b lim: 35 exec/s: 46 rss: 73Mb 
L: 33/35 MS: 1 InsertByte- 00:08:22.493 [2024-06-11 13:36:15.340798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000001a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.340831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.340919] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.340944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.341036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.341060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.341149] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.341170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.493 [2024-06-11 13:36:15.341270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:8 cdw10:0000048f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.493 [2024-06-11 13:36:15.341295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:22.493 #47 NEW cov: 12074 ft: 14643 corp: 30/915b lim: 35 exec/s: 47 rss: 73Mb L: 35/35 MS: 1 CopyPart- 00:08:22.752 [2024-06-11 13:36:15.420879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000042f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.752 [2024-06-11 13:36:15.420913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:22.752 [2024-06-11 13:36:15.421003] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.752 [2024-06-11 13:36:15.421028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:22.752 [2024-06-11 13:36:15.421121] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.752 [2024-06-11 13:36:15.421143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:22.752 [2024-06-11 13:36:15.421239] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000048b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:22.752 [2024-06-11 13:36:15.421263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:22.752 #48 NEW cov: 12074 ft: 14653 corp: 31/947b lim: 35 exec/s: 24 rss: 73Mb L: 32/35 MS: 1 ShuffleBytes- 00:08:22.752 #48 DONE cov: 12074 ft: 14653 corp: 31/947b lim: 35 exec/s: 24 rss: 73Mb 00:08:22.752 Done 48 runs in 2 second(s) 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- 
nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4416 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:22.752 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:22.753 13:36:15 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:08:22.753 [2024-06-11 13:36:15.632853] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:22.753 [2024-06-11 13:36:15.632933] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3444297 ] 00:08:23.011 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.011 [2024-06-11 13:36:15.847214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.270 [2024-06-11 13:36:15.930994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.270 [2024-06-11 13:36:15.995175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.270 [2024-06-11 13:36:16.011561] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:08:23.270 INFO: Running with entropic power schedule (0xFF, 100). 
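The nvmf/run.sh trace above shows how each fuzzer index gets its own TCP listener and config before llvm_nvme_fuzz starts: the two-digit index appears to be appended to 44 to form the port (16 -> 4416, 17 -> 4417), the shared fuzz_json.conf template is rewritten with that trsvcid, known shutdown-path leaks are suppressed for LeakSanitizer, and the fuzzer is launched against the per-index corpus directory. A minimal shell sketch of that setup, read off the trace rather than the actual run.sh source ($rootdir, the redirect targets, and the exact port derivation are assumptions):

    fuzzer_type=16
    timen=1
    core=0x1
    port="44$(printf %02d "$fuzzer_type")"        # assumed derivation; 4416 for index 16, 4417 for 17
    nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
    corpus_dir="$rootdir/../corpus/llvm_nvmf_${fuzzer_type}"
    mkdir -p "$corpus_dir"
    # rewrite the template listener port for this fuzzer instance (redirect target assumed)
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    # suppress known shutdown-path leaks for LeakSanitizer (target file assumed from LSAN_OPTIONS)
    echo "leak:spdk_nvmf_qpair_disconnect" >  /var/tmp/suppress_nvmf_fuzz
    echo "leak:nvmf_ctrlr_create"          >> /var/tmp/suppress_nvmf_fuzz
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$rootdir/../output/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"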
00:08:23.270 INFO: Seed: 2998951316 00:08:23.270 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:23.270 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:23.270 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:08:23.270 INFO: A corpus is not provided, starting from an empty corpus 00:08:23.270 #2 INITED exec/s: 0 rss: 65Mb 00:08:23.270 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:23.270 This may also happen if the target rejected all inputs we tried so far 00:08:23.270 [2024-06-11 13:36:16.061294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.270 [2024-06-11 13:36:16.061335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.270 [2024-06-11 13:36:16.061408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.270 [2024-06-11 13:36:16.061436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.270 [2024-06-11 13:36:16.061520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.270 [2024-06-11 13:36:16.061548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.530 NEW_FUNC[1/687]: 0x499940 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:08:23.530 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:23.530 #4 NEW cov: 11934 ft: 11934 corp: 2/79b lim: 105 exec/s: 0 rss: 72Mb L: 78/78 MS: 2 CrossOver-InsertRepeatedBytes- 00:08:23.530 [2024-06-11 13:36:16.271961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.272006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.272073] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.272100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.272180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.272213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.272297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.272323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.530 #15 NEW cov: 12064 ft: 12949 corp: 3/167b lim: 105 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 InsertRepeatedBytes- 00:08:23.530 [2024-06-11 13:36:16.332027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043209519168250 len:64001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.332065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.332141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.332168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.332256] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.332285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.332369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.332397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:23.530 #16 NEW cov: 12070 ft: 13210 corp: 4/266b lim: 105 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:08:23.530 [2024-06-11 13:36:16.412105] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.412143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.412216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.412245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.530 [2024-06-11 13:36:16.412330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.530 [2024-06-11 13:36:16.412357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.790 #17 NEW cov: 12155 ft: 13396 corp: 5/345b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 InsertByte- 00:08:23.790 [2024-06-11 13:36:16.492304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.790 [2024-06-11 13:36:16.492340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.790 [2024-06-11 13:36:16.492414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.790 
[2024-06-11 13:36:16.492441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.790 [2024-06-11 13:36:16.492523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.790 [2024-06-11 13:36:16.492550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.790 #18 NEW cov: 12155 ft: 13522 corp: 6/424b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 ShuffleBytes- 00:08:23.790 [2024-06-11 13:36:16.562484] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.790 [2024-06-11 13:36:16.562524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.790 [2024-06-11 13:36:16.562595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.790 [2024-06-11 13:36:16.562622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.790 [2024-06-11 13:36:16.562706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.790 [2024-06-11 13:36:16.562734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.791 #19 NEW cov: 12155 ft: 13660 corp: 7/503b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 CopyPart- 00:08:23.791 [2024-06-11 13:36:16.612638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.612675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.791 [2024-06-11 13:36:16.612748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.612775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.791 [2024-06-11 13:36:16.612857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.612885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.791 #20 NEW cov: 12155 ft: 13750 corp: 8/582b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:23.791 [2024-06-11 13:36:16.692957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043209519168250 len:64001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.692994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:23.791 [2024-06-11 
13:36:16.693067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.693098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:23.791 [2024-06-11 13:36:16.693180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.693213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:23.791 [2024-06-11 13:36:16.693295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:23.791 [2024-06-11 13:36:16.693321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.049 #21 NEW cov: 12155 ft: 13800 corp: 9/681b lim: 105 exec/s: 0 rss: 72Mb L: 99/99 MS: 1 ChangeBinInt- 00:08:24.049 [2024-06-11 13:36:16.773045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.049 [2024-06-11 13:36:16.773083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.049 [2024-06-11 13:36:16.773157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.049 [2024-06-11 13:36:16.773184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.049 [2024-06-11 13:36:16.773275] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.049 [2024-06-11 13:36:16.773302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.049 #22 NEW cov: 12155 ft: 13825 corp: 10/760b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 ChangeByte- 00:08:24.049 [2024-06-11 13:36:16.843218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.049 [2024-06-11 13:36:16.843254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.049 [2024-06-11 13:36:16.843333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.049 [2024-06-11 13:36:16.843361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.049 [2024-06-11 13:36:16.843444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.843471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.050 #23 NEW cov: 12155 ft: 13891 corp: 
11/839b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:24.050 [2024-06-11 13:36:16.893410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.893447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.050 [2024-06-11 13:36:16.893517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744069532024831 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.893544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.050 [2024-06-11 13:36:16.893628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.893659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.050 #24 NEW cov: 12155 ft: 13913 corp: 12/918b lim: 105 exec/s: 0 rss: 72Mb L: 79/99 MS: 1 ChangeByte- 00:08:24.050 [2024-06-11 13:36:16.943711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043209519168250 len:64001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.943747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.050 [2024-06-11 13:36:16.943826] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.943854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.050 [2024-06-11 13:36:16.943937] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.943963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.050 [2024-06-11 13:36:16.944050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.050 [2024-06-11 13:36:16.944076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.309 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:24.309 #25 NEW cov: 12178 ft: 13957 corp: 13/1022b lim: 105 exec/s: 0 rss: 72Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:08:24.309 [2024-06-11 13:36:17.023811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.023847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.309 [2024-06-11 13:36:17.023920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 
len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.023947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.309 [2024-06-11 13:36:17.024032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.024060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.309 #26 NEW cov: 12178 ft: 14074 corp: 14/1101b lim: 105 exec/s: 26 rss: 72Mb L: 79/104 MS: 1 ChangeBit- 00:08:24.309 [2024-06-11 13:36:17.104040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.104077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.309 [2024-06-11 13:36:17.104152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.104180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.309 [2024-06-11 13:36:17.104271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.104301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.309 #27 NEW cov: 12178 ft: 14110 corp: 15/1180b lim: 105 exec/s: 27 rss: 72Mb L: 79/104 MS: 1 ChangeBinInt- 00:08:24.309 [2024-06-11 13:36:17.154168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.154211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.309 [2024-06-11 13:36:17.154284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.154310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.309 [2024-06-11 13:36:17.154394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.309 [2024-06-11 13:36:17.154420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.309 #28 NEW cov: 12178 ft: 14117 corp: 16/1260b lim: 105 exec/s: 28 rss: 73Mb L: 80/104 MS: 1 InsertByte- 00:08:24.569 [2024-06-11 13:36:17.234423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.234459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.234532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.234560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.234645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.234672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.569 #29 NEW cov: 12178 ft: 14143 corp: 17/1339b lim: 105 exec/s: 29 rss: 73Mb L: 79/104 MS: 1 InsertByte- 00:08:24.569 [2024-06-11 13:36:17.284697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.284733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.284813] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.284841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.284922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.284949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.285030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446708889337462783 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.285055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.569 #30 NEW cov: 12178 ft: 14186 corp: 18/1426b lim: 105 exec/s: 30 rss: 73Mb L: 87/104 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:24.569 [2024-06-11 13:36:17.344676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.344719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.344794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.344821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.344903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.344930] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.569 #31 NEW cov: 12178 ft: 14187 corp: 19/1505b lim: 105 exec/s: 31 rss: 73Mb L: 79/104 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:24.569 [2024-06-11 13:36:17.415099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.415135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.415220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.415248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.415333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.415360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.569 [2024-06-11 13:36:17.415445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.569 [2024-06-11 13:36:17.415470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.569 #32 NEW cov: 12178 ft: 14206 corp: 20/1600b lim: 105 exec/s: 32 rss: 73Mb L: 95/104 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:24.828 [2024-06-11 13:36:17.495272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.828 [2024-06-11 13:36:17.495308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.828 [2024-06-11 13:36:17.495388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:1792 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.495416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.495497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.495524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.495607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.495630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.829 #33 NEW cov: 12178 ft: 14224 corp: 21/1689b lim: 105 exec/s: 33 rss: 73Mb L: 89/104 MS: 1 CopyPart- 
00:08:24.829 [2024-06-11 13:36:17.575547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.575583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.575659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3617008645356650290 len:12851 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.575686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.575767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:3617009522372129330 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.575794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.575878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.575904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.829 #34 NEW cov: 12178 ft: 14235 corp: 22/1793b lim: 105 exec/s: 34 rss: 73Mb L: 104/104 MS: 1 InsertRepeatedBytes- 00:08:24.829 [2024-06-11 13:36:17.625677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.625712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.625791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.625819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.625902] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.625931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.626016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.626040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:24.829 #35 NEW cov: 12178 ft: 14242 corp: 23/1889b lim: 105 exec/s: 35 rss: 73Mb L: 96/104 MS: 1 CrossOver- 00:08:24.829 [2024-06-11 13:36:17.705955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.705990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.706070] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.706097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.706180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.706212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:24.829 [2024-06-11 13:36:17.706301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:24.829 [2024-06-11 13:36:17.706329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.088 #36 NEW cov: 12178 ft: 14252 corp: 24/1985b lim: 105 exec/s: 36 rss: 73Mb L: 96/104 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:25.088 [2024-06-11 13:36:17.786019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.786055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.786125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446743339270143999 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.786152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.786241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.786269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.088 #37 NEW cov: 12178 ft: 14318 corp: 25/2065b lim: 105 exec/s: 37 rss: 74Mb L: 80/104 MS: 1 InsertByte- 00:08:25.088 [2024-06-11 13:36:17.856357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.856394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.856471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.856498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.856580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709499647 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 
13:36:17.856608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.856694] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.856720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.088 #38 NEW cov: 12178 ft: 14319 corp: 26/2162b lim: 105 exec/s: 38 rss: 74Mb L: 97/104 MS: 1 InsertByte- 00:08:25.088 [2024-06-11 13:36:17.936360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744071663452159 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.936399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.936471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.936498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.088 [2024-06-11 13:36:17.936583] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.088 [2024-06-11 13:36:17.936610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.088 #39 NEW cov: 12178 ft: 14327 corp: 27/2242b lim: 105 exec/s: 39 rss: 74Mb L: 80/104 MS: 1 InsertByte- 00:08:25.348 [2024-06-11 13:36:18.016743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18085043209519168250 len:64001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.348 [2024-06-11 13:36:18.016779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.348 [2024-06-11 13:36:18.016860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.348 [2024-06-11 13:36:18.016888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:25.348 [2024-06-11 13:36:18.016971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.348 [2024-06-11 13:36:18.016997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:25.348 [2024-06-11 13:36:18.017079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:25.348 [2024-06-11 13:36:18.017105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:25.348 #45 NEW cov: 12178 ft: 14345 corp: 28/2342b lim: 105 exec/s: 22 rss: 74Mb L: 100/104 MS: 1 CopyPart- 00:08:25.348 #45 DONE cov: 12178 ft: 14345 corp: 28/2342b lim: 105 exec/s: 22 rss: 74Mb 00:08:25.348 ###### Recommended 
dictionary. ###### 00:08:25.348 "\377\377\377\377\377\377\377\377" # Uses: 5 00:08:25.348 ###### End of recommended dictionary. ###### 00:08:25.348 Done 45 runs in 2 second(s) 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4417 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:25.348 13:36:18 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:08:25.348 [2024-06-11 13:36:18.236301] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:25.348 [2024-06-11 13:36:18.236377] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3444715 ] 00:08:25.607 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.607 [2024-06-11 13:36:18.448180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.866 [2024-06-11 13:36:18.531944] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.866 [2024-06-11 13:36:18.595965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.866 [2024-06-11 13:36:18.612336] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:08:25.866 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.866 INFO: Seed: 1305928381 00:08:25.866 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:25.866 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:25.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:08:25.866 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.866 #2 INITED exec/s: 0 rss: 65Mb 00:08:25.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:25.866 This may also happen if the target rejected all inputs we tried so far 00:08:25.866 [2024-06-11 13:36:18.667349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.866 [2024-06-11 13:36:18.667394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:25.866 [2024-06-11 13:36:18.667445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:25.866 [2024-06-11 13:36:18.667470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.125 NEW_FUNC[1/688]: 0x49ccc0 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:08:26.125 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:26.125 #35 NEW cov: 11955 ft: 11953 corp: 2/51b lim: 120 exec/s: 0 rss: 72Mb L: 50/50 MS: 3 ChangeByte-InsertByte-InsertRepeatedBytes- 00:08:26.125 [2024-06-11 13:36:18.927960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.125 [2024-06-11 13:36:18.928013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.125 [2024-06-11 13:36:18.928065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.125 [2024-06-11 13:36:18.928090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.125 #38 NEW cov: 12085 ft: 12557 corp: 3/113b lim: 120 exec/s: 0 rss: 72Mb L: 62/62 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 
00:08:26.125 [2024-06-11 13:36:19.018003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.125 [2024-06-11 13:36:19.018044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.382 #39 NEW cov: 12091 ft: 13608 corp: 4/155b lim: 120 exec/s: 0 rss: 72Mb L: 42/62 MS: 1 EraseBytes- 00:08:26.382 [2024-06-11 13:36:19.148415] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.382 [2024-06-11 13:36:19.148458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.382 [2024-06-11 13:36:19.148512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.383 [2024-06-11 13:36:19.148538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.383 #40 NEW cov: 12176 ft: 13828 corp: 5/205b lim: 120 exec/s: 0 rss: 72Mb L: 50/62 MS: 1 ChangeBinInt- 00:08:26.383 [2024-06-11 13:36:19.248727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.383 [2024-06-11 13:36:19.248770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.383 [2024-06-11 13:36:19.248823] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.383 [2024-06-11 13:36:19.248850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.641 #41 NEW cov: 12176 ft: 13880 corp: 6/267b lim: 120 exec/s: 0 rss: 72Mb L: 62/62 MS: 1 ShuffleBytes- 00:08:26.641 [2024-06-11 13:36:19.379106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.641 [2024-06-11 13:36:19.379149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.641 [2024-06-11 13:36:19.379210] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.641 [2024-06-11 13:36:19.379235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.641 #42 NEW cov: 12176 ft: 14009 corp: 7/330b lim: 120 exec/s: 0 rss: 72Mb L: 63/63 MS: 1 InsertByte- 00:08:26.641 [2024-06-11 13:36:19.469242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.641 [2024-06-11 13:36:19.469286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.900 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:26.900 #43 NEW cov: 12193 ft: 14224 corp: 8/373b lim: 120 
exec/s: 0 rss: 72Mb L: 43/63 MS: 1 EraseBytes- 00:08:26.900 [2024-06-11 13:36:19.609697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.900 [2024-06-11 13:36:19.609740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:26.900 [2024-06-11 13:36:19.609789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.900 [2024-06-11 13:36:19.609814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:26.900 #49 NEW cov: 12193 ft: 14238 corp: 9/423b lim: 120 exec/s: 49 rss: 72Mb L: 50/63 MS: 1 ChangeBit- 00:08:26.900 [2024-06-11 13:36:19.739957] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:26.900 [2024-06-11 13:36:19.739999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.159 #50 NEW cov: 12193 ft: 14277 corp: 10/465b lim: 120 exec/s: 50 rss: 72Mb L: 42/63 MS: 1 ShuffleBytes- 00:08:27.159 [2024-06-11 13:36:19.870341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.159 [2024-06-11 13:36:19.870382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.159 #51 NEW cov: 12193 ft: 14338 corp: 11/508b lim: 120 exec/s: 51 rss: 72Mb L: 43/63 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:08:27.159 [2024-06-11 13:36:19.991356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.159 [2024-06-11 13:36:19.991394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.159 #52 NEW cov: 12193 ft: 14379 corp: 12/539b lim: 120 exec/s: 52 rss: 72Mb L: 31/63 MS: 1 EraseBytes- 00:08:27.159 [2024-06-11 13:36:20.041525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.159 [2024-06-11 13:36:20.041563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.419 #53 NEW cov: 12193 ft: 14451 corp: 13/581b lim: 120 exec/s: 53 rss: 72Mb L: 42/63 MS: 1 ShuffleBytes- 00:08:27.419 [2024-06-11 13:36:20.121775] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.121816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.419 #54 NEW cov: 12193 ft: 14460 corp: 14/625b lim: 120 exec/s: 54 rss: 72Mb L: 44/63 MS: 1 CopyPart- 00:08:27.419 [2024-06-11 13:36:20.172063] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.172098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.419 [2024-06-11 13:36:20.172166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.172193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.419 #55 NEW cov: 12193 ft: 14511 corp: 15/692b lim: 120 exec/s: 55 rss: 72Mb L: 67/67 MS: 1 CopyPart- 00:08:27.419 [2024-06-11 13:36:20.232064] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.232100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.419 #56 NEW cov: 12193 ft: 14568 corp: 16/734b lim: 120 exec/s: 56 rss: 72Mb L: 42/67 MS: 1 ChangeBinInt- 00:08:27.419 [2024-06-11 13:36:20.282630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.282666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.419 [2024-06-11 13:36:20.282735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.282761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.419 [2024-06-11 13:36:20.282840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.419 [2024-06-11 13:36:20.282866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:27.678 #57 NEW cov: 12193 ft: 14988 corp: 17/809b lim: 120 exec/s: 57 rss: 72Mb L: 75/75 MS: 1 CMP- DE: "\000\003\345\330\301\345^\022"- 00:08:27.678 [2024-06-11 13:36:20.362682] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2260559756489858847 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.678 [2024-06-11 13:36:20.362722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.678 [2024-06-11 13:36:20.362811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.678 [2024-06-11 13:36:20.362839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.678 #58 NEW cov: 12193 ft: 14993 corp: 18/871b lim: 120 exec/s: 58 rss: 72Mb L: 62/75 MS: 1 ChangeBit- 00:08:27.678 [2024-06-11 13:36:20.432894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.678 [2024-06-11 13:36:20.432929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.678 [2024-06-11 13:36:20.433002] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.678 [2024-06-11 13:36:20.433029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.678 #59 NEW cov: 12193 ft: 15010 corp: 19/925b lim: 120 exec/s: 59 rss: 72Mb L: 54/75 MS: 1 InsertRepeatedBytes- 00:08:27.679 [2024-06-11 13:36:20.503047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-06-11 13:36:20.503083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.679 [2024-06-11 13:36:20.503157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-06-11 13:36:20.503186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.679 #60 NEW cov: 12200 ft: 15020 corp: 20/992b lim: 120 exec/s: 60 rss: 72Mb L: 67/75 MS: 1 CopyPart- 00:08:27.679 [2024-06-11 13:36:20.553175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-06-11 13:36:20.553216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.679 [2024-06-11 13:36:20.553288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.679 [2024-06-11 13:36:20.553315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.679 #61 NEW cov: 12200 ft: 15051 corp: 21/1043b lim: 120 exec/s: 61 rss: 72Mb L: 51/75 MS: 1 InsertByte- 00:08:27.938 [2024-06-11 13:36:20.603372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.938 [2024-06-11 13:36:20.603409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.938 [2024-06-11 13:36:20.603482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:576460752825548800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.938 [2024-06-11 13:36:20.603511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:27.938 #62 NEW cov: 12200 ft: 15118 corp: 22/1110b lim: 120 exec/s: 62 rss: 72Mb L: 67/75 MS: 1 CrossOver- 00:08:27.938 [2024-06-11 13:36:20.653311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2242545357980376863 len:7968 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:27.938 [2024-06-11 13:36:20.653348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:27.938 #63 NEW cov: 12200 ft: 15157 corp: 23/1142b lim: 120 exec/s: 31 rss: 73Mb L: 32/75 MS: 1 InsertByte- 00:08:27.938 #63 DONE cov: 12200 ft: 15157 corp: 23/1142b lim: 120 exec/s: 31 rss: 73Mb 00:08:27.938 ###### Recommended 
dictionary. ###### 00:08:27.938 "\377\377\377\377\377\377\377\377" # Uses: 0 00:08:27.938 "\000\003\345\330\301\345^\022" # Uses: 0 00:08:27.938 ###### End of recommended dictionary. ###### 00:08:27.938 Done 63 runs in 2 second(s) 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4418 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:28.197 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:28.198 13:36:20 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:08:28.198 [2024-06-11 13:36:20.906659] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
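The nvmf/run.sh trace above repeats the same setup for every fuzzer type: printf %02d turns the fuzzer number into a port suffix (4418 for fuzzer 18), the per-run corpus directory is created, the trsvcid in the target JSON config is rewritten with sed, two LSAN leak suppressions are emitted, and llvm_nvme_fuzz is launched against the resulting transport ID. The shell sketch below condenses those traced steps with the fuzzer number as a variable; it mirrors the commands shown in the log rather than quoting run.sh itself, and the redirect targets for sed and the echo lines are inferred from the -c argument and the LSAN_OPTIONS suppression path, since the set -x trace does not show redirections.

    # Condensed sketch of the per-fuzzer setup traced above (illustrative, not
    # the actual nvmf/run.sh). Paths are the ones printed in this log.
    SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzzer_type=18
    timen=1
    core=0x1
    port=44$(printf %02d "$fuzzer_type")
    corpus_dir=$SPDK/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
    # redirect targets below are assumptions; the trace only shows the commands
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"
    LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0" \
        "$SPDK/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
        -P "$SPDK/../output/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type"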
00:08:28.198 [2024-06-11 13:36:20.906736] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445124 ] 00:08:28.198 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.457 [2024-06-11 13:36:21.117637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.457 [2024-06-11 13:36:21.202095] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.457 [2024-06-11 13:36:21.266090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.457 [2024-06-11 13:36:21.282473] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:08:28.457 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.457 INFO: Seed: 3973927731 00:08:28.457 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:28.457 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:28.457 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:08:28.457 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.457 #2 INITED exec/s: 0 rss: 65Mb 00:08:28.457 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.457 This may also happen if the target rejected all inputs we tried so far 00:08:28.457 [2024-06-11 13:36:21.330992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.457 [2024-06-11 13:36:21.331032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.716 NEW_FUNC[1/686]: 0x4a05b0 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:08:28.716 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:28.716 #36 NEW cov: 11898 ft: 11894 corp: 2/23b lim: 100 exec/s: 0 rss: 72Mb L: 22/22 MS: 4 CMP-ChangeBit-InsertByte-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\377\377"- 00:08:28.716 [2024-06-11 13:36:21.541512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.716 [2024-06-11 13:36:21.541562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.716 #37 NEW cov: 12028 ft: 12551 corp: 3/45b lim: 100 exec/s: 0 rss: 72Mb L: 22/22 MS: 1 ChangeBinInt- 00:08:28.716 [2024-06-11 13:36:21.622071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.716 [2024-06-11 13:36:21.622110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.716 [2024-06-11 13:36:21.622177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.716 [2024-06-11 13:36:21.622207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.716 [2024-06-11 13:36:21.622284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.716 [2024-06-11 
13:36:21.622310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.716 [2024-06-11 13:36:21.622390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:28.716 [2024-06-11 13:36:21.622415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.975 #61 NEW cov: 12034 ft: 13129 corp: 4/133b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 4 ChangeBit-ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:28.975 [2024-06-11 13:36:21.682269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.975 [2024-06-11 13:36:21.682305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.682374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.975 [2024-06-11 13:36:21.682399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.682474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.975 [2024-06-11 13:36:21.682500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.682577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:28.975 [2024-06-11 13:36:21.682602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.975 #62 NEW cov: 12119 ft: 13386 corp: 5/221b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 ChangeBinInt- 00:08:28.975 [2024-06-11 13:36:21.762472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.975 [2024-06-11 13:36:21.762507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.762577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.975 [2024-06-11 13:36:21.762603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.762687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.975 [2024-06-11 13:36:21.762711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.762790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:28.975 [2024-06-11 13:36:21.762816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.975 #63 NEW cov: 12119 ft: 13442 corp: 6/309b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\001"- 00:08:28.975 [2024-06-11 13:36:21.842713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:28.975 [2024-06-11 
13:36:21.842748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.842820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:28.975 [2024-06-11 13:36:21.842845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.842919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:28.975 [2024-06-11 13:36:21.842944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:28.975 [2024-06-11 13:36:21.843024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:28.975 [2024-06-11 13:36:21.843049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:28.975 #64 NEW cov: 12119 ft: 13486 corp: 7/397b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 ChangeByte- 00:08:29.234 [2024-06-11 13:36:21.892837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.234 [2024-06-11 13:36:21.892872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.234 [2024-06-11 13:36:21.892941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.234 [2024-06-11 13:36:21.892967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.234 [2024-06-11 13:36:21.893043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.234 [2024-06-11 13:36:21.893068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.234 [2024-06-11 13:36:21.893146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.234 [2024-06-11 13:36:21.893171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.234 #65 NEW cov: 12119 ft: 13516 corp: 8/485b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 ChangeByte- 00:08:29.234 [2024-06-11 13:36:21.972623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.234 [2024-06-11 13:36:21.972661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.234 #66 NEW cov: 12119 ft: 13613 corp: 9/507b lim: 100 exec/s: 0 rss: 72Mb L: 22/88 MS: 1 CMP- DE: "\037\000\000\000"- 00:08:29.234 [2024-06-11 13:36:22.042853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.234 [2024-06-11 13:36:22.042890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.234 #67 NEW cov: 12119 ft: 13638 corp: 10/529b lim: 100 exec/s: 0 rss: 72Mb L: 22/88 MS: 1 ShuffleBytes- 00:08:29.234 [2024-06-11 13:36:22.093258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:0 nsid:0 00:08:29.234 [2024-06-11 13:36:22.093298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.234 [2024-06-11 13:36:22.093363] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.234 [2024-06-11 13:36:22.093389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.234 [2024-06-11 13:36:22.093466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.234 [2024-06-11 13:36:22.093493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.234 #71 NEW cov: 12119 ft: 13909 corp: 11/594b lim: 100 exec/s: 0 rss: 72Mb L: 65/88 MS: 4 ChangeByte-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:08:29.234 [2024-06-11 13:36:22.143564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.234 [2024-06-11 13:36:22.143600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.234 [2024-06-11 13:36:22.143671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.235 [2024-06-11 13:36:22.143697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.235 [2024-06-11 13:36:22.143774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.235 [2024-06-11 13:36:22.143799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.235 [2024-06-11 13:36:22.143877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.235 [2024-06-11 13:36:22.143903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.492 #72 NEW cov: 12119 ft: 13930 corp: 12/682b lim: 100 exec/s: 0 rss: 72Mb L: 88/88 MS: 1 CopyPart- 00:08:29.492 [2024-06-11 13:36:22.193205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.492 [2024-06-11 13:36:22.193241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.492 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:29.492 #73 NEW cov: 12142 ft: 13977 corp: 13/705b lim: 100 exec/s: 0 rss: 72Mb L: 23/88 MS: 1 InsertByte- 00:08:29.492 [2024-06-11 13:36:22.243369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.492 [2024-06-11 13:36:22.243405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.492 #74 NEW cov: 12142 ft: 14044 corp: 14/728b lim: 100 exec/s: 0 rss: 72Mb L: 23/88 MS: 1 CopyPart- 00:08:29.492 [2024-06-11 13:36:22.324053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.492 [2024-06-11 13:36:22.324088] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.492 [2024-06-11 13:36:22.324160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.492 [2024-06-11 13:36:22.324185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.492 [2024-06-11 13:36:22.324271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.492 [2024-06-11 13:36:22.324297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.492 [2024-06-11 13:36:22.324375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.492 [2024-06-11 13:36:22.324405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.492 #75 NEW cov: 12142 ft: 14067 corp: 15/816b lim: 100 exec/s: 75 rss: 72Mb L: 88/88 MS: 1 ShuffleBytes- 00:08:29.492 [2024-06-11 13:36:22.404287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.492 [2024-06-11 13:36:22.404322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.492 [2024-06-11 13:36:22.404393] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.492 [2024-06-11 13:36:22.404420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.492 [2024-06-11 13:36:22.404496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.492 [2024-06-11 13:36:22.404521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.751 [2024-06-11 13:36:22.404599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.751 [2024-06-11 13:36:22.404624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:29.751 #76 NEW cov: 12142 ft: 14094 corp: 16/904b lim: 100 exec/s: 76 rss: 72Mb L: 88/88 MS: 1 ChangeByte- 00:08:29.751 [2024-06-11 13:36:22.484059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.751 [2024-06-11 13:36:22.484094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.751 #77 NEW cov: 12142 ft: 14120 corp: 17/924b lim: 100 exec/s: 77 rss: 73Mb L: 20/88 MS: 1 EraseBytes- 00:08:29.751 [2024-06-11 13:36:22.554553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.751 [2024-06-11 13:36:22.554590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.751 [2024-06-11 13:36:22.554656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.751 [2024-06-11 13:36:22.554681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:29.751 [2024-06-11 13:36:22.554758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.751 [2024-06-11 13:36:22.554783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.751 #78 NEW cov: 12142 ft: 14151 corp: 18/989b lim: 100 exec/s: 78 rss: 73Mb L: 65/88 MS: 1 ChangeBinInt- 00:08:29.751 [2024-06-11 13:36:22.634940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:29.751 [2024-06-11 13:36:22.634976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:29.751 [2024-06-11 13:36:22.635049] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:29.751 [2024-06-11 13:36:22.635074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:29.751 [2024-06-11 13:36:22.635151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:29.751 [2024-06-11 13:36:22.635177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:29.751 [2024-06-11 13:36:22.635261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:29.751 [2024-06-11 13:36:22.635287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.010 #79 NEW cov: 12142 ft: 14165 corp: 19/1077b lim: 100 exec/s: 79 rss: 73Mb L: 88/88 MS: 1 ShuffleBytes- 00:08:30.010 [2024-06-11 13:36:22.685070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.010 [2024-06-11 13:36:22.685105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.010 [2024-06-11 13:36:22.685176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.010 [2024-06-11 13:36:22.685207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.010 [2024-06-11 13:36:22.685283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.010 [2024-06-11 13:36:22.685308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.010 [2024-06-11 13:36:22.685384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:30.010 [2024-06-11 13:36:22.685409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.010 #80 NEW cov: 12142 ft: 14181 corp: 20/1165b lim: 100 exec/s: 80 rss: 73Mb L: 88/88 MS: 1 PersAutoDict- DE: "\037\000\000\000"- 00:08:30.010 [2024-06-11 13:36:22.735231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.011 [2024-06-11 13:36:22.735266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.011 
[2024-06-11 13:36:22.735341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.011 [2024-06-11 13:36:22.735366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.011 [2024-06-11 13:36:22.735443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.011 [2024-06-11 13:36:22.735468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.011 [2024-06-11 13:36:22.735545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:30.011 [2024-06-11 13:36:22.735570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.011 #81 NEW cov: 12142 ft: 14185 corp: 21/1252b lim: 100 exec/s: 81 rss: 73Mb L: 87/88 MS: 1 EraseBytes- 00:08:30.011 [2024-06-11 13:36:22.784939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.011 [2024-06-11 13:36:22.784974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.011 #82 NEW cov: 12142 ft: 14197 corp: 22/1273b lim: 100 exec/s: 82 rss: 73Mb L: 21/88 MS: 1 EraseBytes- 00:08:30.011 [2024-06-11 13:36:22.835552] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.011 [2024-06-11 13:36:22.835588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.011 [2024-06-11 13:36:22.835657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.011 [2024-06-11 13:36:22.835681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.011 [2024-06-11 13:36:22.835757] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.011 [2024-06-11 13:36:22.835781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.011 [2024-06-11 13:36:22.835858] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:30.011 [2024-06-11 13:36:22.835883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.011 #83 NEW cov: 12142 ft: 14282 corp: 23/1361b lim: 100 exec/s: 83 rss: 73Mb L: 88/88 MS: 1 ShuffleBytes- 00:08:30.011 [2024-06-11 13:36:22.915382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.011 [2024-06-11 13:36:22.915417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.270 #84 NEW cov: 12142 ft: 14296 corp: 24/1390b lim: 100 exec/s: 84 rss: 73Mb L: 29/88 MS: 1 CMP- DE: "\000\000\000\000\000\000\004\000"- 00:08:30.270 [2024-06-11 13:36:22.985994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.270 [2024-06-11 13:36:22.986028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:22.986100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.270 [2024-06-11 13:36:22.986126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:22.986208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.270 [2024-06-11 13:36:22.986241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:22.986318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:30.270 [2024-06-11 13:36:22.986344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.270 #85 NEW cov: 12142 ft: 14368 corp: 25/1479b lim: 100 exec/s: 85 rss: 73Mb L: 89/89 MS: 1 InsertByte- 00:08:30.270 [2024-06-11 13:36:23.066231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.270 [2024-06-11 13:36:23.066269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:23.066338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.270 [2024-06-11 13:36:23.066363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:23.066443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.270 [2024-06-11 13:36:23.066468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:23.066547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:08:30.270 [2024-06-11 13:36:23.066573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.270 #86 NEW cov: 12142 ft: 14374 corp: 26/1568b lim: 100 exec/s: 86 rss: 73Mb L: 89/89 MS: 1 InsertByte- 00:08:30.270 [2024-06-11 13:36:23.116309] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.270 [2024-06-11 13:36:23.116343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:23.116418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:08:30.270 [2024-06-11 13:36:23.116443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:23.116521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:08:30.270 [2024-06-11 13:36:23.116547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:30.270 [2024-06-11 13:36:23.116626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:3 nsid:0 00:08:30.270 [2024-06-11 13:36:23.116652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:30.270 #87 NEW cov: 12142 ft: 14416 corp: 27/1667b lim: 100 exec/s: 87 rss: 73Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:08:30.564 [2024-06-11 13:36:23.196145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.564 [2024-06-11 13:36:23.196178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.564 #88 NEW cov: 12142 ft: 14428 corp: 28/1687b lim: 100 exec/s: 88 rss: 73Mb L: 20/99 MS: 1 EraseBytes- 00:08:30.564 [2024-06-11 13:36:23.266293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:08:30.564 [2024-06-11 13:36:23.266328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:30.564 #89 NEW cov: 12142 ft: 14459 corp: 29/1714b lim: 100 exec/s: 44 rss: 74Mb L: 27/99 MS: 1 CrossOver- 00:08:30.564 #89 DONE cov: 12142 ft: 14459 corp: 29/1714b lim: 100 exec/s: 44 rss: 74Mb 00:08:30.564 ###### Recommended dictionary. ###### 00:08:30.564 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:30.564 "\000\000\000\000\000\000\000\001" # Uses: 0 00:08:30.564 "\037\000\000\000" # Uses: 1 00:08:30.564 "\000\000\000\000\000\000\004\000" # Uses: 0 00:08:30.564 ###### End of recommended dictionary. ###### 00:08:30.564 Done 89 runs in 2 second(s) 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4419 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 
00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:30.877 13:36:23 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:08:30.877 [2024-06-11 13:36:23.514861] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:30.877 [2024-06-11 13:36:23.514927] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445479 ] 00:08:30.877 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.877 [2024-06-11 13:36:23.730926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.136 [2024-06-11 13:36:23.815465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.136 [2024-06-11 13:36:23.879340] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.136 [2024-06-11 13:36:23.895714] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:08:31.136 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.136 INFO: Seed: 2293976063 00:08:31.136 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:31.136 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:31.136 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:08:31.136 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.136 #2 INITED exec/s: 0 rss: 64Mb 00:08:31.136 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:31.136 This may also happen if the target rejected all inputs we tried so far 00:08:31.136 [2024-06-11 13:36:23.941378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.136 [2024-06-11 13:36:23.941406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.136 [2024-06-11 13:36:23.941457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.136 [2024-06-11 13:36:23.941471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.136 [2024-06-11 13:36:23.941490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:31.136 [2024-06-11 13:36:23.941503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.136 [2024-06-11 13:36:23.941558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.136 [2024-06-11 13:36:23.941571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.395 NEW_FUNC[1/686]: 0x4a3570 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:08:31.395 NEW_FUNC[2/686]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:31.395 #40 NEW cov: 11876 ft: 11876 corp: 2/48b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:31.395 [2024-06-11 13:36:24.131809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.395 [2024-06-11 13:36:24.131844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.395 [2024-06-11 13:36:24.131895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.395 [2024-06-11 13:36:24.131906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.395 [2024-06-11 13:36:24.131954] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:31.395 [2024-06-11 13:36:24.131968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.132019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.132032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.396 #41 NEW cov: 12006 ft: 12468 corp: 3/95b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeByte- 00:08:31.396 [2024-06-11 13:36:24.181856] nvme_qpair.c: 
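Each completed run above ends with a "###### Recommended dictionary. ######" block: byte sequences that libFuzzer found useful when solving comparisons, for example "\377\377\377\377\377\377\377\377" and "\000\003\345\330\301\345^\022" from run 17. With a plain libFuzzer target such entries are normally saved to a dictionary file and fed back with the standard -dict= option; whether the llvm_nvme_fuzz wrapper forwards extra libFuzzer options is not shown in this log, so the sketch below is illustrative only, with the octal escapes from the log rewritten as the hex escapes a dictionary file uses.

    # Sketch: capture the recommended entries in a libFuzzer dictionary file.
    # The file name is arbitrary; the entries are the ones printed after run 17,
    # converted from octal (\377) to hex (\xff) escapes.
    printf '%s\n' \
        'nvmf_17_kv1="\xff\xff\xff\xff\xff\xff\xff\xff"' \
        'nvmf_17_kv2="\x00\x03\xe5\xd8\xc1\xe5^\x12"' \
        > /tmp/llvm_nvmf_17.dict
    # a standalone libFuzzer binary would then consume it via:
    #   ./fuzz_target -dict=/tmp/llvm_nvmf_17.dict corpus_dir/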
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.181882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.181930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.396 [2024-06-11 13:36:24.181943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.181971] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.181984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.182036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.182049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.396 #42 NEW cov: 12012 ft: 12686 corp: 4/142b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeBit- 00:08:31.396 [2024-06-11 13:36:24.231991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.232017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.232068] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.232081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.232104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:31.396 [2024-06-11 13:36:24.232117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.232168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.232180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.396 #43 NEW cov: 12097 ft: 13001 corp: 5/189b lim: 50 exec/s: 0 rss: 71Mb L: 47/47 MS: 1 ChangeBit- 00:08:31.396 [2024-06-11 13:36:24.272103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.272127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.272178] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598316183552 len:1792 00:08:31.396 [2024-06-11 13:36:24.272192] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.272228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.272242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.396 [2024-06-11 13:36:24.272292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.396 [2024-06-11 13:36:24.272305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.396 #44 NEW cov: 12097 ft: 13051 corp: 6/236b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBinInt- 00:08:31.656 [2024-06-11 13:36:24.322283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.322308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.322356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.322370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.322399] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65328 00:08:31.656 [2024-06-11 13:36:24.322412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.322461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:1095216660480 len:65536 00:08:31.656 [2024-06-11 13:36:24.322473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #45 NEW cov: 12097 ft: 13098 corp: 7/283b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBinInt- 00:08:31.656 [2024-06-11 13:36:24.362386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.362410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.362460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.362473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.362505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:31.656 [2024-06-11 13:36:24.362518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.362569] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.362581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #46 NEW cov: 12097 ft: 13135 corp: 8/330b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ShuffleBytes- 00:08:31.656 [2024-06-11 13:36:24.412524] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.412549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.412601] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.412615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.412646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:31.656 [2024-06-11 13:36:24.412659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.412711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073675997183 len:65536 00:08:31.656 [2024-06-11 13:36:24.412725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #47 NEW cov: 12097 ft: 13170 corp: 9/377b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 CopyPart- 00:08:31.656 [2024-06-11 13:36:24.462649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.462673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.462724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.462737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.462768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686295551 len:65536 00:08:31.656 [2024-06-11 13:36:24.462781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.462834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.462847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #48 NEW cov: 12097 ft: 13182 corp: 10/420b lim: 50 exec/s: 0 rss: 72Mb L: 43/47 MS: 1 EraseBytes- 00:08:31.656 [2024-06-11 13:36:24.512795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE 
sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.512819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.512867] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.656 [2024-06-11 13:36:24.512880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.512913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446673704965373951 len:65536 00:08:31.656 [2024-06-11 13:36:24.512924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.512976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.512988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.656 #49 NEW cov: 12097 ft: 13291 corp: 11/467b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBit- 00:08:31.656 [2024-06-11 13:36:24.552933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.552958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.553006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:31.656 [2024-06-11 13:36:24.553020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.553051] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:31.656 [2024-06-11 13:36:24.553063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.656 [2024-06-11 13:36:24.553112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073695920127 len:65536 00:08:31.656 [2024-06-11 13:36:24.553128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.916 #50 NEW cov: 12097 ft: 13315 corp: 12/514b lim: 50 exec/s: 0 rss: 72Mb L: 47/47 MS: 1 ChangeBinInt- 00:08:31.916 [2024-06-11 13:36:24.593023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.593048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.593097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.916 [2024-06-11 13:36:24.593110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.593140] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446673704965373951 len:65536 00:08:31.916 [2024-06-11 13:36:24.593153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.593204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.593218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.916 #51 NEW cov: 12097 ft: 13404 corp: 13/562b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 CopyPart- 00:08:31.916 [2024-06-11 13:36:24.643146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.643169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.643218] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.916 [2024-06-11 13:36:24.643232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.643271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446628624988635135 len:65536 00:08:31.916 [2024-06-11 13:36:24.643283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.643334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.643347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.916 #52 NEW cov: 12097 ft: 13410 corp: 14/609b lim: 50 exec/s: 0 rss: 72Mb L: 47/48 MS: 1 ChangeByte- 00:08:31.916 [2024-06-11 13:36:24.683302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742982787858431 len:256 00:08:31.916 [2024-06-11 13:36:24.683327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.683377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.916 [2024-06-11 13:36:24.683391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.683419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.683432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.683489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.683518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.916 #53 NEW cov: 12097 ft: 13425 corp: 15/656b lim: 50 exec/s: 0 rss: 72Mb L: 47/48 MS: 1 CMP- DE: "\001\000"- 00:08:31.916 [2024-06-11 13:36:24.723433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.723462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.723509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.916 [2024-06-11 13:36:24.723522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.723545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446673704965373951 len:65536 00:08:31.916 [2024-06-11 13:36:24.723558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.723610] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.723623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.916 #54 NEW cov: 12097 ft: 13453 corp: 16/703b lim: 50 exec/s: 0 rss: 72Mb L: 47/48 MS: 1 CopyPart- 00:08:31.916 [2024-06-11 13:36:24.763566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.763594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.763639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:13056 00:08:31.916 [2024-06-11 13:36:24.763652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.763675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446743798831644671 len:65536 00:08:31.916 [2024-06-11 13:36:24.763688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.763740] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.763753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:31.916 #60 NEW cov: 12097 ft: 13472 corp: 17/751b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 InsertByte- 00:08:31.916 [2024-06-11 13:36:24.813688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.813713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.813762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:31.916 [2024-06-11 13:36:24.813775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.813804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446673704965373951 len:65536 00:08:31.916 [2024-06-11 13:36:24.813818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:31.916 [2024-06-11 13:36:24.813869] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:31.916 [2024-06-11 13:36:24.813882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.176 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:32.176 #61 NEW cov: 12120 ft: 13510 corp: 18/799b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 ShuffleBytes- 00:08:32.176 [2024-06-11 13:36:24.863796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.863821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.863868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:32.176 [2024-06-11 13:36:24.863882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.863913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709488895 len:65536 00:08:32.176 [2024-06-11 13:36:24.863926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.863975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.863988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.176 #62 NEW cov: 12120 ft: 13541 corp: 19/847b lim: 50 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 CrossOver- 00:08:32.176 [2024-06-11 13:36:24.903929] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742982787858431 len:256 00:08:32.176 [2024-06-11 13:36:24.903955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.904002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 
00:08:32.176 [2024-06-11 13:36:24.904015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.904046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3602879701896396799 len:65536 00:08:32.176 [2024-06-11 13:36:24.904058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.904110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.904123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.176 #63 NEW cov: 12120 ft: 13572 corp: 20/895b lim: 50 exec/s: 63 rss: 72Mb L: 48/48 MS: 1 InsertByte- 00:08:32.176 [2024-06-11 13:36:24.954193] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.954221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.954279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.954292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.954333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65534 00:08:32.176 [2024-06-11 13:36:24.954345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.954397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551613 len:65536 00:08:32.176 [2024-06-11 13:36:24.954427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.954481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.954494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:32.176 #64 NEW cov: 12120 ft: 13603 corp: 21/945b lim: 50 exec/s: 64 rss: 72Mb L: 50/50 MS: 1 CrossOver- 00:08:32.176 [2024-06-11 13:36:24.994207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.994232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.994282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.994295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:08:32.176 [2024-06-11 13:36:24.994327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.994339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:24.994389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:24.994402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.176 #65 NEW cov: 12120 ft: 13625 corp: 22/992b lim: 50 exec/s: 65 rss: 72Mb L: 47/50 MS: 1 CrossOver- 00:08:32.176 [2024-06-11 13:36:25.034322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:25.034348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:25.034396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744039349813247 len:65536 00:08:32.176 [2024-06-11 13:36:25.034410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:25.034440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.176 [2024-06-11 13:36:25.034453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:25.034502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073675997183 len:65536 00:08:32.176 [2024-06-11 13:36:25.034514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.176 #66 NEW cov: 12120 ft: 13651 corp: 23/1039b lim: 50 exec/s: 66 rss: 72Mb L: 47/50 MS: 1 ChangeBit- 00:08:32.176 [2024-06-11 13:36:25.074448] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.176 [2024-06-11 13:36:25.074477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:25.074525] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:32.176 [2024-06-11 13:36:25.074539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:25.074558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446628624988635135 len:65536 00:08:32.176 [2024-06-11 13:36:25.074570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.176 [2024-06-11 13:36:25.074621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709420543 
len:65536 00:08:32.176 [2024-06-11 13:36:25.074634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.436 #67 NEW cov: 12120 ft: 13685 corp: 24/1086b lim: 50 exec/s: 67 rss: 72Mb L: 47/50 MS: 1 ChangeBit- 00:08:32.436 [2024-06-11 13:36:25.124486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.124511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.124558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744039349813247 len:65536 00:08:32.436 [2024-06-11 13:36:25.124571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.124594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.124607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.436 #68 NEW cov: 12120 ft: 13960 corp: 25/1122b lim: 50 exec/s: 68 rss: 72Mb L: 36/50 MS: 1 EraseBytes- 00:08:32.436 [2024-06-11 13:36:25.174733] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.174757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.174805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.174819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.174851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.436 [2024-06-11 13:36:25.174864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.174916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073675997183 len:65536 00:08:32.436 [2024-06-11 13:36:25.174930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.436 #69 NEW cov: 12120 ft: 13968 corp: 26/1169b lim: 50 exec/s: 69 rss: 72Mb L: 47/50 MS: 1 ShuffleBytes- 00:08:32.436 [2024-06-11 13:36:25.214852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.214876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.214927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.214943] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.214969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.436 [2024-06-11 13:36:25.214982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.215031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073695920127 len:65536 00:08:32.436 [2024-06-11 13:36:25.215043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.436 #70 NEW cov: 12120 ft: 14020 corp: 27/1217b lim: 50 exec/s: 70 rss: 72Mb L: 48/50 MS: 1 InsertByte- 00:08:32.436 [2024-06-11 13:36:25.264734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.264758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.264809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65282 00:08:32.436 [2024-06-11 13:36:25.264823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.436 #74 NEW cov: 12120 ft: 14277 corp: 28/1246b lim: 50 exec/s: 74 rss: 72Mb L: 29/50 MS: 4 CrossOver-CopyPart-ChangeBinInt-CrossOver- 00:08:32.436 [2024-06-11 13:36:25.305096] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.305120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.436 [2024-06-11 13:36:25.305170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.436 [2024-06-11 13:36:25.305183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.437 [2024-06-11 13:36:25.305225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.437 [2024-06-11 13:36:25.305236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.437 [2024-06-11 13:36:25.305285] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.437 [2024-06-11 13:36:25.305298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.437 #75 NEW cov: 12120 ft: 14292 corp: 29/1293b lim: 50 exec/s: 75 rss: 72Mb L: 47/50 MS: 1 ChangeBinInt- 00:08:32.437 [2024-06-11 13:36:25.345191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65318 00:08:32.437 [2024-06-11 13:36:25.345222] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.437 [2024-06-11 13:36:25.345272] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744039349813247 len:65536 00:08:32.437 [2024-06-11 13:36:25.345285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.437 [2024-06-11 13:36:25.345317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.437 [2024-06-11 13:36:25.345329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.437 [2024-06-11 13:36:25.345383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073675997183 len:65536 00:08:32.437 [2024-06-11 13:36:25.345396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.696 #76 NEW cov: 12120 ft: 14297 corp: 30/1340b lim: 50 exec/s: 76 rss: 72Mb L: 47/50 MS: 1 ChangeByte- 00:08:32.696 [2024-06-11 13:36:25.385372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.696 [2024-06-11 13:36:25.385397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.385445] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.696 [2024-06-11 13:36:25.385458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.385492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.696 [2024-06-11 13:36:25.385504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.385555] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073695920127 len:65536 00:08:32.696 [2024-06-11 13:36:25.385567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.696 #77 NEW cov: 12120 ft: 14301 corp: 31/1388b lim: 50 exec/s: 77 rss: 73Mb L: 48/50 MS: 1 PersAutoDict- DE: "\001\000"- 00:08:32.696 [2024-06-11 13:36:25.435517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.696 [2024-06-11 13:36:25.435542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.435591] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744039349813247 len:65536 00:08:32.696 [2024-06-11 13:36:25.435605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 
13:36:25.435634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.696 [2024-06-11 13:36:25.435647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.435697] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073675996159 len:65536 00:08:32.696 [2024-06-11 13:36:25.435710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.696 #78 NEW cov: 12120 ft: 14318 corp: 32/1435b lim: 50 exec/s: 78 rss: 73Mb L: 47/50 MS: 1 ChangeBit- 00:08:32.696 [2024-06-11 13:36:25.475641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.696 [2024-06-11 13:36:25.475665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.475716] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:32.696 [2024-06-11 13:36:25.475730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.475761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446628624988635135 len:65536 00:08:32.696 [2024-06-11 13:36:25.475775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.475825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446611032802590719 len:65419 00:08:32.696 [2024-06-11 13:36:25.475855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.696 #79 NEW cov: 12120 ft: 14332 corp: 33/1475b lim: 50 exec/s: 79 rss: 73Mb L: 40/50 MS: 1 EraseBytes- 00:08:32.696 [2024-06-11 13:36:25.515724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.696 [2024-06-11 13:36:25.515748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.696 [2024-06-11 13:36:25.515798] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709486335 len:65536 00:08:32.696 [2024-06-11 13:36:25.515812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.515844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.697 [2024-06-11 13:36:25.515856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.515907] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073695920127 len:65536 00:08:32.697 
[2024-06-11 13:36:25.515919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.697 #80 NEW cov: 12120 ft: 14356 corp: 34/1522b lim: 50 exec/s: 80 rss: 73Mb L: 47/50 MS: 1 ChangeByte- 00:08:32.697 [2024-06-11 13:36:25.555844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65318 00:08:32.697 [2024-06-11 13:36:25.555868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.555916] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744039349813247 len:65536 00:08:32.697 [2024-06-11 13:36:25.555930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.555962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.697 [2024-06-11 13:36:25.555973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.556022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073675997183 len:65536 00:08:32.697 [2024-06-11 13:36:25.556034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.697 #81 NEW cov: 12120 ft: 14368 corp: 35/1569b lim: 50 exec/s: 81 rss: 73Mb L: 47/50 MS: 1 ShuffleBytes- 00:08:32.697 [2024-06-11 13:36:25.606005] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.697 [2024-06-11 13:36:25.606030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.606079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.697 [2024-06-11 13:36:25.606091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.606120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:11520 00:08:32.697 [2024-06-11 13:36:25.606134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.697 [2024-06-11 13:36:25.606185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.697 [2024-06-11 13:36:25.606203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.956 #82 NEW cov: 12120 ft: 14371 corp: 36/1616b lim: 50 exec/s: 82 rss: 73Mb L: 47/50 MS: 1 ChangeByte- 00:08:32.956 [2024-06-11 13:36:25.646113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.956 [2024-06-11 13:36:25.646138] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.646184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057594037927680 len:65536 00:08:32.956 [2024-06-11 13:36:25.646203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.646241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.956 [2024-06-11 13:36:25.646253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.646304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.956 [2024-06-11 13:36:25.646317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.956 #83 NEW cov: 12120 ft: 14378 corp: 37/1663b lim: 50 exec/s: 83 rss: 73Mb L: 47/50 MS: 1 ChangeBinInt- 00:08:32.956 [2024-06-11 13:36:25.686231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.956 [2024-06-11 13:36:25.686257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.686304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709486335 len:65536 00:08:32.956 [2024-06-11 13:36:25.686317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.686348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.956 [2024-06-11 13:36:25.686360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.686410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073695920127 len:65536 00:08:32.956 [2024-06-11 13:36:25.686423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.956 #84 NEW cov: 12120 ft: 14393 corp: 38/1710b lim: 50 exec/s: 84 rss: 73Mb L: 47/50 MS: 1 ChangeBit- 00:08:32.956 [2024-06-11 13:36:25.736362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742982787858431 len:256 00:08:32.956 [2024-06-11 13:36:25.736386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.736434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:32.956 [2024-06-11 13:36:25.736453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.956 [2024-06-11 13:36:25.736477] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3602879701896396799 len:65536 00:08:32.956 [2024-06-11 13:36:25.736490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.736540] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.957 [2024-06-11 13:36:25.736553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.957 #85 NEW cov: 12120 ft: 14402 corp: 39/1758b lim: 50 exec/s: 85 rss: 73Mb L: 48/50 MS: 1 ShuffleBytes- 00:08:32.957 [2024-06-11 13:36:25.786520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:32.957 [2024-06-11 13:36:25.786544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.786593] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:32.957 [2024-06-11 13:36:25.786606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.786637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:32.957 [2024-06-11 13:36:25.786649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.786699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.957 [2024-06-11 13:36:25.786712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.957 #86 NEW cov: 12120 ft: 14406 corp: 40/1806b lim: 50 exec/s: 86 rss: 73Mb L: 48/50 MS: 1 InsertByte- 00:08:32.957 [2024-06-11 13:36:25.826681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374686483966590975 len:1 00:08:32.957 [2024-06-11 13:36:25.826707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.826758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744069414649855 len:65536 00:08:32.957 [2024-06-11 13:36:25.826772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.826797] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:17870283321406128127 len:65536 00:08:32.957 [2024-06-11 13:36:25.826811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:32.957 [2024-06-11 13:36:25.826860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:32.957 [2024-06-11 
13:36:25.826874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:32.957 #87 NEW cov: 12120 ft: 14450 corp: 41/1849b lim: 50 exec/s: 87 rss: 73Mb L: 43/50 MS: 1 InsertRepeatedBytes- 00:08:33.217 [2024-06-11 13:36:25.876679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446742982787858431 len:256 00:08:33.217 [2024-06-11 13:36:25.876704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.217 [2024-06-11 13:36:25.876753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374686479671623679 len:65536 00:08:33.217 [2024-06-11 13:36:25.876766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.217 [2024-06-11 13:36:25.876788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:3602879701896396799 len:65536 00:08:33.217 [2024-06-11 13:36:25.876800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.217 #88 NEW cov: 12120 ft: 14473 corp: 42/1886b lim: 50 exec/s: 88 rss: 73Mb L: 37/50 MS: 1 EraseBytes- 00:08:33.217 [2024-06-11 13:36:25.926923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:33.217 [2024-06-11 13:36:25.926947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.217 [2024-06-11 13:36:25.926995] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:33.217 [2024-06-11 13:36:25.927008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.217 [2024-06-11 13:36:25.927040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446741874686296063 len:65536 00:08:33.217 [2024-06-11 13:36:25.927053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.217 [2024-06-11 13:36:25.927104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:33.217 [2024-06-11 13:36:25.927116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:33.217 #89 NEW cov: 12120 ft: 14480 corp: 43/1934b lim: 50 exec/s: 44 rss: 73Mb L: 48/50 MS: 1 CopyPart- 00:08:33.217 #89 DONE cov: 12120 ft: 14480 corp: 43/1934b lim: 50 exec/s: 44 rss: 73Mb 00:08:33.217 ###### Recommended dictionary. ###### 00:08:33.217 "\001\000" # Uses: 1 00:08:33.217 ###### End of recommended dictionary. 
###### 00:08:33.217 Done 89 runs in 2 second(s) 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4420 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:33.217 13:36:26 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:08:33.476 [2024-06-11 13:36:26.140107] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:33.476 [2024-06-11 13:36:26.140190] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445851 ] 00:08:33.476 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.476 [2024-06-11 13:36:26.359222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.735 [2024-06-11 13:36:26.443701] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.735 [2024-06-11 13:36:26.507622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.735 [2024-06-11 13:36:26.523994] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:33.735 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:33.735 INFO: Seed: 628004352 00:08:33.735 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:33.735 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:33.735 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:33.735 INFO: A corpus is not provided, starting from an empty corpus 00:08:33.735 #2 INITED exec/s: 0 rss: 64Mb 00:08:33.735 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:33.735 This may also happen if the target rejected all inputs we tried so far 00:08:33.735 [2024-06-11 13:36:26.579077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:33.735 [2024-06-11 13:36:26.579124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.735 [2024-06-11 13:36:26.579174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:33.735 [2024-06-11 13:36:26.579210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.735 [2024-06-11 13:36:26.579258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:33.735 [2024-06-11 13:36:26.579281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:33.994 NEW_FUNC[1/688]: 0x4a5130 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:33.994 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:33.994 #10 NEW cov: 11930 ft: 11930 corp: 2/72b lim: 90 exec/s: 0 rss: 71Mb L: 71/71 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:33.994 [2024-06-11 13:36:26.789494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:33.994 [2024-06-11 13:36:26.789548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.994 [2024-06-11 13:36:26.789600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:33.994 [2024-06-11 13:36:26.789625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.994 #19 NEW cov: 12064 ft: 12818 corp: 3/112b lim: 90 exec/s: 0 rss: 72Mb L: 40/71 MS: 4 ShuffleBytes-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:08:33.994 [2024-06-11 13:36:26.879722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:33.994 [2024-06-11 13:36:26.879772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:33.994 [2024-06-11 13:36:26.879823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:33.994 [2024-06-11 13:36:26.879848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:33.994 [2024-06-11 13:36:26.879893] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:33.994 [2024-06-11 13:36:26.879916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.254 #20 NEW cov: 12070 ft: 13037 corp: 4/183b lim: 90 exec/s: 0 rss: 72Mb L: 71/71 MS: 1 ChangeBinInt- 00:08:34.254 [2024-06-11 13:36:26.981045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.254 [2024-06-11 13:36:26.981084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:26.981155] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.254 [2024-06-11 13:36:26.981182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:26.981270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.254 [2024-06-11 13:36:26.981298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:26.981382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.254 [2024-06-11 13:36:26.981407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.254 #21 NEW cov: 12155 ft: 13728 corp: 5/257b lim: 90 exec/s: 0 rss: 72Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:08:34.254 [2024-06-11 13:36:27.061282] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.254 [2024-06-11 13:36:27.061319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:27.061395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.254 [2024-06-11 13:36:27.061423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:27.061505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.254 [2024-06-11 13:36:27.061530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:27.061611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.254 [2024-06-11 13:36:27.061637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.254 #22 NEW cov: 12155 ft: 13796 corp: 6/331b lim: 90 exec/s: 0 rss: 72Mb L: 74/74 MS: 1 ChangeBit- 00:08:34.254 [2024-06-11 13:36:27.141148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.254 [2024-06-11 13:36:27.141183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.254 [2024-06-11 13:36:27.141264] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.254 [2024-06-11 13:36:27.141291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.513 #23 NEW cov: 12155 ft: 13989 corp: 7/371b lim: 90 exec/s: 0 rss: 72Mb L: 40/74 MS: 1 ChangeBit- 00:08:34.513 [2024-06-11 13:36:27.221935] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.513 [2024-06-11 13:36:27.221970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.222051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.513 [2024-06-11 13:36:27.222078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.222158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.513 [2024-06-11 13:36:27.222184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.222274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.513 [2024-06-11 13:36:27.222301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.222383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:34.513 [2024-06-11 13:36:27.222406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:34.513 #24 NEW cov: 12155 ft: 14136 corp: 8/461b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:08:34.513 [2024-06-11 13:36:27.282079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.513 [2024-06-11 13:36:27.282114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.282196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.513 [2024-06-11 13:36:27.282232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.282315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.513 [2024-06-11 13:36:27.282342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.282423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.513 [2024-06-11 13:36:27.282449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.282530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:34.513 
[2024-06-11 13:36:27.282554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:34.513 #25 NEW cov: 12155 ft: 14160 corp: 9/551b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeByte- 00:08:34.513 [2024-06-11 13:36:27.362273] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.513 [2024-06-11 13:36:27.362308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.362389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.513 [2024-06-11 13:36:27.362416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.362501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.513 [2024-06-11 13:36:27.362528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.362614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.513 [2024-06-11 13:36:27.362641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.513 [2024-06-11 13:36:27.362723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:34.513 [2024-06-11 13:36:27.362749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:34.513 #26 NEW cov: 12155 ft: 14195 corp: 10/641b lim: 90 exec/s: 0 rss: 72Mb L: 90/90 MS: 1 ChangeBinInt- 00:08:34.514 [2024-06-11 13:36:27.412226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.514 [2024-06-11 13:36:27.412262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.514 [2024-06-11 13:36:27.412337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.514 [2024-06-11 13:36:27.412363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.514 [2024-06-11 13:36:27.412444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.514 [2024-06-11 13:36:27.412470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.514 [2024-06-11 13:36:27.412553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.514 [2024-06-11 13:36:27.412576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.773 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:34.773 #27 NEW cov: 12172 ft: 14267 corp: 11/714b lim: 90 exec/s: 0 rss: 72Mb L: 73/90 MS: 1 CrossOver- 00:08:34.773 [2024-06-11 
13:36:27.492063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.773 [2024-06-11 13:36:27.492100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.773 [2024-06-11 13:36:27.492175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.773 [2024-06-11 13:36:27.492207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.773 #33 NEW cov: 12172 ft: 14278 corp: 12/754b lim: 90 exec/s: 0 rss: 72Mb L: 40/90 MS: 1 ChangeBinInt- 00:08:34.773 [2024-06-11 13:36:27.542652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.773 [2024-06-11 13:36:27.542688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.773 [2024-06-11 13:36:27.542760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.773 [2024-06-11 13:36:27.542789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.773 [2024-06-11 13:36:27.542871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:34.773 [2024-06-11 13:36:27.542897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:34.773 [2024-06-11 13:36:27.542983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:34.773 [2024-06-11 13:36:27.543007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:34.773 #34 NEW cov: 12172 ft: 14324 corp: 13/835b lim: 90 exec/s: 34 rss: 72Mb L: 81/90 MS: 1 CMP- DE: "G\000\000\000\000\000\000\000"- 00:08:34.773 [2024-06-11 13:36:27.622490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:34.773 [2024-06-11 13:36:27.622527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:34.773 [2024-06-11 13:36:27.622601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:34.773 [2024-06-11 13:36:27.622630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:34.773 #35 NEW cov: 12172 ft: 14347 corp: 14/875b lim: 90 exec/s: 35 rss: 72Mb L: 40/90 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:08:35.032 [2024-06-11 13:36:27.702707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.032 [2024-06-11 13:36:27.702743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.702818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.032 [2024-06-11 13:36:27.702846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.032 #36 NEW cov: 12172 ft: 14371 corp: 15/916b lim: 90 exec/s: 36 rss: 72Mb L: 41/90 MS: 1 InsertByte- 00:08:35.032 [2024-06-11 13:36:27.783332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.032 [2024-06-11 13:36:27.783369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.783442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.032 [2024-06-11 13:36:27.783470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.783549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.032 [2024-06-11 13:36:27.783575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.783659] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.032 [2024-06-11 13:36:27.783686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.032 #37 NEW cov: 12172 ft: 14382 corp: 16/990b lim: 90 exec/s: 37 rss: 73Mb L: 74/90 MS: 1 ChangeBinInt- 00:08:35.032 [2024-06-11 13:36:27.863564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.032 [2024-06-11 13:36:27.863600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.863676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.032 [2024-06-11 13:36:27.863703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.863783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.032 [2024-06-11 13:36:27.863811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.032 [2024-06-11 13:36:27.863893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.032 [2024-06-11 13:36:27.863916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.033 #38 NEW cov: 12172 ft: 14479 corp: 17/1064b lim: 90 exec/s: 38 rss: 73Mb L: 74/90 MS: 1 PersAutoDict- DE: "G\000\000\000\000\000\000\000"- 00:08:35.033 [2024-06-11 13:36:27.913933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.033 [2024-06-11 13:36:27.913973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.033 [2024-06-11 13:36:27.914045] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.033 [2024-06-11 13:36:27.914071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.033 [2024-06-11 13:36:27.914151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.033 [2024-06-11 13:36:27.914177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.033 [2024-06-11 13:36:27.914266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.033 [2024-06-11 13:36:27.914298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.033 [2024-06-11 13:36:27.914383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.033 [2024-06-11 13:36:27.914405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.292 #39 NEW cov: 12172 ft: 14551 corp: 18/1154b lim: 90 exec/s: 39 rss: 73Mb L: 90/90 MS: 1 CopyPart- 00:08:35.292 [2024-06-11 13:36:27.993367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.292 [2024-06-11 13:36:27.993404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.292 #43 NEW cov: 12172 ft: 15373 corp: 19/1172b lim: 90 exec/s: 43 rss: 73Mb L: 18/90 MS: 4 CopyPart-InsertByte-PersAutoDict-PersAutoDict- DE: "G\000\000\000\000\000\000\000"-"G\000\000\000\000\000\000\000"- 00:08:35.292 [2024-06-11 13:36:28.054313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.292 [2024-06-11 13:36:28.054348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.054428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.292 [2024-06-11 13:36:28.054455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.054536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.292 [2024-06-11 13:36:28.054564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.054646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.292 [2024-06-11 13:36:28.054673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.054755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.292 [2024-06-11 13:36:28.054779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.292 #44 NEW cov: 12172 ft: 15375 corp: 20/1262b lim: 90 exec/s: 44 rss: 73Mb L: 90/90 MS: 1 ChangeBinInt- 00:08:35.292 [2024-06-11 13:36:28.104254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 
00:08:35.292 [2024-06-11 13:36:28.104291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.104365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.292 [2024-06-11 13:36:28.104392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.104479] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.292 [2024-06-11 13:36:28.104506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.104590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.292 [2024-06-11 13:36:28.104613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.292 #45 NEW cov: 12172 ft: 15393 corp: 21/1334b lim: 90 exec/s: 45 rss: 73Mb L: 72/90 MS: 1 InsertByte- 00:08:35.292 [2024-06-11 13:36:28.163988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.292 [2024-06-11 13:36:28.164022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.292 [2024-06-11 13:36:28.164094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.292 [2024-06-11 13:36:28.164121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.292 #46 NEW cov: 12172 ft: 15414 corp: 22/1374b lim: 90 exec/s: 46 rss: 73Mb L: 40/90 MS: 1 CopyPart- 00:08:35.552 [2024-06-11 13:36:28.214781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.552 [2024-06-11 13:36:28.214816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.214896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.552 [2024-06-11 13:36:28.214923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.215004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.552 [2024-06-11 13:36:28.215032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.215113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.552 [2024-06-11 13:36:28.215137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.215226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.552 [2024-06-11 13:36:28.215254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.552 #47 NEW cov: 12172 ft: 15496 corp: 23/1464b lim: 90 exec/s: 47 rss: 73Mb L: 90/90 MS: 1 ShuffleBytes- 00:08:35.552 [2024-06-11 13:36:28.264740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.552 [2024-06-11 13:36:28.264775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.264852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.552 [2024-06-11 13:36:28.264880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.264962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.552 [2024-06-11 13:36:28.264990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.265071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.552 [2024-06-11 13:36:28.265096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.552 #48 NEW cov: 12172 ft: 15537 corp: 24/1538b lim: 90 exec/s: 48 rss: 73Mb L: 74/90 MS: 1 ChangeBit- 00:08:35.552 [2024-06-11 13:36:28.315063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.552 [2024-06-11 13:36:28.315099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.315179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.552 [2024-06-11 13:36:28.315215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.315297] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.552 [2024-06-11 13:36:28.315324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.315406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.552 [2024-06-11 13:36:28.315431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.315514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.552 [2024-06-11 13:36:28.315538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.552 #49 NEW cov: 12172 ft: 15579 corp: 25/1628b lim: 90 exec/s: 49 rss: 73Mb L: 90/90 MS: 1 ChangeBit- 00:08:35.552 [2024-06-11 13:36:28.395304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.552 [2024-06-11 13:36:28.395340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.395422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.552 [2024-06-11 13:36:28.395449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.395531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.552 [2024-06-11 13:36:28.395558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.395641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.552 [2024-06-11 13:36:28.395667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.395749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.552 [2024-06-11 13:36:28.395774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.552 #50 NEW cov: 12172 ft: 15610 corp: 26/1718b lim: 90 exec/s: 50 rss: 73Mb L: 90/90 MS: 1 CopyPart- 00:08:35.552 [2024-06-11 13:36:28.445449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.552 [2024-06-11 13:36:28.445484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.445561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.552 [2024-06-11 13:36:28.445590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.445672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.552 [2024-06-11 13:36:28.445703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.445783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.552 [2024-06-11 13:36:28.445809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.552 [2024-06-11 13:36:28.445892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.552 [2024-06-11 13:36:28.445917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.812 #51 NEW cov: 12179 ft: 15625 corp: 27/1808b lim: 90 exec/s: 51 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:08:35.812 [2024-06-11 13:36:28.525664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:35.812 [2024-06-11 13:36:28.525699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:35.812 [2024-06-11 13:36:28.525780] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:35.812 [2024-06-11 13:36:28.525807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:35.812 [2024-06-11 13:36:28.525887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:35.812 [2024-06-11 13:36:28.525914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:35.812 [2024-06-11 13:36:28.525994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:35.812 [2024-06-11 13:36:28.526019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:35.812 [2024-06-11 13:36:28.526101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:08:35.812 [2024-06-11 13:36:28.526125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:08:35.812 #52 NEW cov: 12179 ft: 15650 corp: 28/1898b lim: 90 exec/s: 26 rss: 73Mb L: 90/90 MS: 1 CrossOver- 00:08:35.812 #52 DONE cov: 12179 ft: 15650 corp: 28/1898b lim: 90 exec/s: 26 rss: 73Mb 00:08:35.812 ###### Recommended dictionary. ###### 00:08:35.812 "G\000\000\000\000\000\000\000" # Uses: 4 00:08:35.812 ###### End of recommended dictionary. ###### 00:08:35.812 Done 52 runs in 2 second(s) 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4421 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:35.812 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:36.071 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 
00:08:36.071 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:36.072 13:36:28 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:08:36.072 [2024-06-11 13:36:28.751461] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:36.072 [2024-06-11 13:36:28.751529] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446268 ] 00:08:36.072 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.072 [2024-06-11 13:36:28.969109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.331 [2024-06-11 13:36:29.054013] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.331 [2024-06-11 13:36:29.118260] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.331 [2024-06-11 13:36:29.134629] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:36.331 INFO: Running with entropic power schedule (0xFF, 100). 00:08:36.331 INFO: Seed: 3238012992 00:08:36.331 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:36.331 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:36.331 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:36.331 INFO: A corpus is not provided, starting from an empty corpus 00:08:36.331 #2 INITED exec/s: 0 rss: 64Mb 00:08:36.331 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:36.331 This may also happen if the target rejected all inputs we tried so far 00:08:36.331 [2024-06-11 13:36:29.201692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.331 [2024-06-11 13:36:29.201746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.590 NEW_FUNC[1/688]: 0x4a8350 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:36.590 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:36.590 #17 NEW cov: 11909 ft: 11902 corp: 2/20b lim: 50 exec/s: 0 rss: 72Mb L: 19/19 MS: 5 InsertByte-ChangeBinInt-CopyPart-ChangeBinInt-InsertRepeatedBytes- 00:08:36.590 [2024-06-11 13:36:29.422896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.590 [2024-06-11 13:36:29.422960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.590 [2024-06-11 13:36:29.423075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:36.590 [2024-06-11 13:36:29.423104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.590 #18 NEW cov: 12039 ft: 13348 corp: 3/40b lim: 50 exec/s: 0 rss: 72Mb L: 20/20 MS: 1 InsertByte- 00:08:36.849 [2024-06-11 13:36:29.513327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.849 [2024-06-11 13:36:29.513377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.849 [2024-06-11 13:36:29.513459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:36.849 [2024-06-11 13:36:29.513482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:36.849 [2024-06-11 13:36:29.513579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:36.849 [2024-06-11 13:36:29.513600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:36.849 #24 NEW cov: 12045 ft: 13811 corp: 4/78b lim: 50 exec/s: 0 rss: 72Mb L: 38/38 MS: 1 CrossOver- 00:08:36.849 [2024-06-11 13:36:29.603077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.849 [2024-06-11 13:36:29.603112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.849 #26 NEW cov: 12130 ft: 14201 corp: 5/88b lim: 50 exec/s: 0 rss: 72Mb L: 10/38 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:36.849 [2024-06-11 13:36:29.673446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.849 [2024-06-11 13:36:29.673481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:36.849 #27 NEW cov: 12130 ft: 14324 corp: 
6/100b lim: 50 exec/s: 0 rss: 72Mb L: 12/38 MS: 1 CrossOver- 00:08:36.849 [2024-06-11 13:36:29.733750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:36.849 [2024-06-11 13:36:29.733786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.109 #29 NEW cov: 12130 ft: 14366 corp: 7/112b lim: 50 exec/s: 0 rss: 72Mb L: 12/38 MS: 2 CrossOver-CrossOver- 00:08:37.109 [2024-06-11 13:36:29.824241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.109 [2024-06-11 13:36:29.824279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.109 #30 NEW cov: 12130 ft: 14445 corp: 8/131b lim: 50 exec/s: 0 rss: 72Mb L: 19/38 MS: 1 CopyPart- 00:08:37.109 [2024-06-11 13:36:29.894568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.109 [2024-06-11 13:36:29.894604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.109 #31 NEW cov: 12130 ft: 14528 corp: 9/144b lim: 50 exec/s: 0 rss: 72Mb L: 13/38 MS: 1 InsertByte- 00:08:37.109 [2024-06-11 13:36:29.986092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.109 [2024-06-11 13:36:29.986129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.109 [2024-06-11 13:36:29.986230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.109 [2024-06-11 13:36:29.986249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.109 [2024-06-11 13:36:29.986355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.109 [2024-06-11 13:36:29.986376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.368 #32 NEW cov: 12130 ft: 14585 corp: 10/183b lim: 50 exec/s: 0 rss: 72Mb L: 39/39 MS: 1 CrossOver- 00:08:37.368 [2024-06-11 13:36:30.075898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.369 [2024-06-11 13:36:30.075939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.369 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:37.369 #33 NEW cov: 12156 ft: 14883 corp: 11/202b lim: 50 exec/s: 0 rss: 72Mb L: 19/39 MS: 1 ShuffleBytes- 00:08:37.369 [2024-06-11 13:36:30.166176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.369 [2024-06-11 13:36:30.166227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.369 #34 NEW cov: 12156 ft: 14905 corp: 12/215b lim: 50 exec/s: 34 rss: 72Mb L: 13/39 MS: 1 ChangeBinInt- 00:08:37.369 [2024-06-11 13:36:30.257101] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) 
sqid:1 cid:0 nsid:0 00:08:37.369 [2024-06-11 13:36:30.257139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.369 [2024-06-11 13:36:30.257220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.369 [2024-06-11 13:36:30.257243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.628 #35 NEW cov: 12156 ft: 14952 corp: 13/240b lim: 50 exec/s: 35 rss: 72Mb L: 25/39 MS: 1 InsertRepeatedBytes- 00:08:37.628 [2024-06-11 13:36:30.327065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.628 [2024-06-11 13:36:30.327101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.628 #36 NEW cov: 12156 ft: 15004 corp: 14/254b lim: 50 exec/s: 36 rss: 73Mb L: 14/39 MS: 1 InsertByte- 00:08:37.628 [2024-06-11 13:36:30.418780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.628 [2024-06-11 13:36:30.418818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.628 [2024-06-11 13:36:30.418910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:37.628 [2024-06-11 13:36:30.418932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:37.628 [2024-06-11 13:36:30.419017] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:37.628 [2024-06-11 13:36:30.419039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:37.628 [2024-06-11 13:36:30.419142] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:37.628 [2024-06-11 13:36:30.419166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:37.628 #37 NEW cov: 12156 ft: 15303 corp: 15/301b lim: 50 exec/s: 37 rss: 73Mb L: 47/47 MS: 1 CMP- DE: "\000\003\345\341\376\236\000z"- 00:08:37.628 [2024-06-11 13:36:30.508109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.628 [2024-06-11 13:36:30.508147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.887 #38 NEW cov: 12156 ft: 15363 corp: 16/313b lim: 50 exec/s: 38 rss: 73Mb L: 12/47 MS: 1 ChangeBit- 00:08:37.887 [2024-06-11 13:36:30.568652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.887 [2024-06-11 13:36:30.568691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.887 #39 NEW cov: 12156 ft: 15381 corp: 17/323b lim: 50 exec/s: 39 rss: 73Mb L: 10/47 MS: 1 EraseBytes- 00:08:37.887 [2024-06-11 13:36:30.658992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.887 [2024-06-11 13:36:30.659029] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.887 #40 NEW cov: 12156 ft: 15392 corp: 18/342b lim: 50 exec/s: 40 rss: 73Mb L: 19/47 MS: 1 PersAutoDict- DE: "\000\003\345\341\376\236\000z"- 00:08:37.887 [2024-06-11 13:36:30.749520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:37.887 [2024-06-11 13:36:30.749563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:37.887 #44 NEW cov: 12156 ft: 15401 corp: 19/359b lim: 50 exec/s: 44 rss: 73Mb L: 17/47 MS: 4 ChangeBit-CMP-ShuffleBytes-CMP- DE: "\377\377~\"X\020\003Q"-"\001\000\000\000\000\000\000\000"- 00:08:38.146 [2024-06-11 13:36:30.810044] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.146 [2024-06-11 13:36:30.810080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.146 #45 NEW cov: 12156 ft: 15429 corp: 20/378b lim: 50 exec/s: 45 rss: 73Mb L: 19/47 MS: 1 ChangeBinInt- 00:08:38.146 [2024-06-11 13:36:30.870358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.146 [2024-06-11 13:36:30.870395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.146 #46 NEW cov: 12156 ft: 15519 corp: 21/388b lim: 50 exec/s: 46 rss: 73Mb L: 10/47 MS: 1 ChangeBinInt- 00:08:38.146 [2024-06-11 13:36:30.961211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.146 [2024-06-11 13:36:30.961247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.146 [2024-06-11 13:36:30.961334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:38.146 [2024-06-11 13:36:30.961359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.146 #47 NEW cov: 12156 ft: 15524 corp: 22/408b lim: 50 exec/s: 47 rss: 73Mb L: 20/47 MS: 1 ChangeBit- 00:08:38.146 [2024-06-11 13:36:31.031756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.146 [2024-06-11 13:36:31.031791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.146 [2024-06-11 13:36:31.031878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:38.146 [2024-06-11 13:36:31.031900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.146 [2024-06-11 13:36:31.031974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:38.146 [2024-06-11 13:36:31.031996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.405 #48 NEW cov: 12156 ft: 15538 corp: 23/446b lim: 50 exec/s: 48 rss: 73Mb L: 38/47 MS: 1 InsertRepeatedBytes- 00:08:38.406 [2024-06-11 13:36:31.122839] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:38.406 [2024-06-11 13:36:31.122877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.406 [2024-06-11 13:36:31.122969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:38.406 [2024-06-11 13:36:31.122995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:38.406 [2024-06-11 13:36:31.123088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:38.406 [2024-06-11 13:36:31.123108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:38.406 [2024-06-11 13:36:31.123218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:38.406 [2024-06-11 13:36:31.123246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:38.406 #49 NEW cov: 12156 ft: 15557 corp: 24/487b lim: 50 exec/s: 24 rss: 74Mb L: 41/47 MS: 1 InsertRepeatedBytes- 00:08:38.406 #49 DONE cov: 12156 ft: 15557 corp: 24/487b lim: 50 exec/s: 24 rss: 74Mb 00:08:38.406 ###### Recommended dictionary. ###### 00:08:38.406 "\000\003\345\341\376\236\000z" # Uses: 1 00:08:38.406 "\377\377~\"X\020\003Q" # Uses: 0 00:08:38.406 "\001\000\000\000\000\000\000\000" # Uses: 0 00:08:38.406 ###### End of recommended dictionary. ###### 00:08:38.406 Done 49 runs in 2 second(s) 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4422 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:38.665 13:36:31 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:08:38.665 [2024-06-11 13:36:31.352268] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:38.665 [2024-06-11 13:36:31.352316] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446704 ] 00:08:38.665 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.665 [2024-06-11 13:36:31.545364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.925 [2024-06-11 13:36:31.631487] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.925 [2024-06-11 13:36:31.695608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.925 [2024-06-11 13:36:31.711952] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:38.925 INFO: Running with entropic power schedule (0xFF, 100). 00:08:38.925 INFO: Seed: 1519027285 00:08:38.925 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:38.925 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:38.925 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:38.925 INFO: A corpus is not provided, starting from an empty corpus 00:08:38.925 #2 INITED exec/s: 0 rss: 65Mb 00:08:38.925 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:38.925 This may also happen if the target rejected all inputs we tried so far 00:08:38.925 [2024-06-11 13:36:31.761124] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:38.925 [2024-06-11 13:36:31.761170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:38.925 [2024-06-11 13:36:31.761256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:38.925 [2024-06-11 13:36:31.761281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.184 NEW_FUNC[1/687]: 0x4aa610 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:39.184 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:39.185 #14 NEW cov: 11909 ft: 11933 corp: 2/39b lim: 85 exec/s: 0 rss: 72Mb L: 38/38 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:39.185 [2024-06-11 13:36:31.921536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.185 [2024-06-11 13:36:31.921584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.185 NEW_FUNC[1/1]: 0xf6bae0 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:416 00:08:39.185 #22 NEW cov: 12065 ft: 13319 corp: 3/62b lim: 85 exec/s: 0 rss: 72Mb L: 23/38 MS: 3 ChangeBit-InsertByte-CrossOver- 00:08:39.185 [2024-06-11 13:36:31.981642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.185 [2024-06-11 13:36:31.981679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.185 #23 NEW cov: 12071 ft: 13570 corp: 4/85b lim: 85 exec/s: 0 rss: 72Mb L: 23/38 MS: 1 ChangeBinInt- 00:08:39.185 [2024-06-11 13:36:32.062064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.185 [2024-06-11 13:36:32.062100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.185 [2024-06-11 13:36:32.062177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.185 [2024-06-11 13:36:32.062210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.442 #24 NEW cov: 12156 ft: 13812 corp: 5/133b lim: 85 exec/s: 0 rss: 72Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:08:39.443 [2024-06-11 13:36:32.142295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.443 [2024-06-11 13:36:32.142331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.443 [2024-06-11 13:36:32.142405] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.443 [2024-06-11 13:36:32.142432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.443 #25 NEW cov: 12156 ft: 13917 corp: 6/176b lim: 85 exec/s: 0 rss: 72Mb L: 43/48 MS: 1 CopyPart- 00:08:39.443 [2024-06-11 13:36:32.192445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.443 [2024-06-11 13:36:32.192481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.443 [2024-06-11 13:36:32.192564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.443 [2024-06-11 13:36:32.192592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.443 #26 NEW cov: 12156 ft: 14016 corp: 7/214b lim: 85 exec/s: 0 rss: 72Mb L: 38/48 MS: 1 CrossOver- 00:08:39.443 [2024-06-11 13:36:32.242555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.443 [2024-06-11 13:36:32.242595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.443 [2024-06-11 13:36:32.242675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.443 [2024-06-11 13:36:32.242703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.443 #27 NEW cov: 12156 ft: 14070 corp: 8/257b lim: 85 exec/s: 0 rss: 72Mb L: 43/48 MS: 1 ChangeByte- 00:08:39.443 [2024-06-11 13:36:32.322802] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.443 [2024-06-11 13:36:32.322838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.443 [2024-06-11 13:36:32.322913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.443 [2024-06-11 13:36:32.322940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.701 #28 NEW cov: 12156 ft: 14094 corp: 9/293b lim: 85 exec/s: 0 rss: 72Mb L: 36/48 MS: 1 EraseBytes- 00:08:39.701 [2024-06-11 13:36:32.372924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.701 [2024-06-11 13:36:32.372961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.373037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.701 [2024-06-11 13:36:32.373065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.701 #29 NEW cov: 12156 ft: 14141 corp: 10/329b lim: 85 exec/s: 0 rss: 72Mb L: 36/48 MS: 1 ChangeBinInt- 00:08:39.701 [2024-06-11 13:36:32.443386] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.701 [2024-06-11 13:36:32.443422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 
13:36:32.443491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.701 [2024-06-11 13:36:32.443519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.443603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.701 [2024-06-11 13:36:32.443629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.701 #30 NEW cov: 12156 ft: 14538 corp: 11/381b lim: 85 exec/s: 0 rss: 72Mb L: 52/52 MS: 1 InsertRepeatedBytes- 00:08:39.701 [2024-06-11 13:36:32.493705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.701 [2024-06-11 13:36:32.493741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.493814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.701 [2024-06-11 13:36:32.493840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.493922] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.701 [2024-06-11 13:36:32.493948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.494030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:39.701 [2024-06-11 13:36:32.494054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.701 #32 NEW cov: 12156 ft: 14893 corp: 12/450b lim: 85 exec/s: 0 rss: 72Mb L: 69/69 MS: 2 CMP-InsertRepeatedBytes- DE: "\377\003"- 00:08:39.701 [2024-06-11 13:36:32.553643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.701 [2024-06-11 13:36:32.553679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.553753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.701 [2024-06-11 13:36:32.553780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.701 [2024-06-11 13:36:32.553864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.701 [2024-06-11 13:36:32.553894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.959 #33 NEW cov: 12156 ft: 14916 corp: 13/502b lim: 85 exec/s: 0 rss: 72Mb L: 52/69 MS: 1 ChangeByte- 00:08:39.959 [2024-06-11 13:36:32.634063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.959 [2024-06-11 13:36:32.634098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:1 00:08:39.959 [2024-06-11 13:36:32.634177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.959 [2024-06-11 13:36:32.634208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.959 [2024-06-11 13:36:32.634290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:39.959 [2024-06-11 13:36:32.634318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:39.959 [2024-06-11 13:36:32.634402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:08:39.959 [2024-06-11 13:36:32.634429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:39.959 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:39.959 #34 NEW cov: 12179 ft: 14956 corp: 14/571b lim: 85 exec/s: 0 rss: 72Mb L: 69/69 MS: 1 ChangeBinInt- 00:08:39.959 [2024-06-11 13:36:32.713916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.959 [2024-06-11 13:36:32.713952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.959 [2024-06-11 13:36:32.714030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.959 [2024-06-11 13:36:32.714058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.959 #35 NEW cov: 12179 ft: 14989 corp: 15/615b lim: 85 exec/s: 35 rss: 72Mb L: 44/69 MS: 1 InsertByte- 00:08:39.959 [2024-06-11 13:36:32.764046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.959 [2024-06-11 13:36:32.764082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.959 [2024-06-11 13:36:32.764157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.959 [2024-06-11 13:36:32.764184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.959 #36 NEW cov: 12179 ft: 15003 corp: 16/663b lim: 85 exec/s: 36 rss: 72Mb L: 48/69 MS: 1 CopyPart- 00:08:39.959 [2024-06-11 13:36:32.834312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:39.959 [2024-06-11 13:36:32.834353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:39.960 [2024-06-11 13:36:32.834434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:39.960 [2024-06-11 13:36:32.834462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:39.960 #37 NEW cov: 12179 ft: 15017 corp: 17/707b lim: 85 exec/s: 37 rss: 72Mb L: 44/69 MS: 1 CMP- DE: "\334\221\206\341\336\345\003\000"- 00:08:40.219 [2024-06-11 
13:36:32.884624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.219 [2024-06-11 13:36:32.884662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.219 [2024-06-11 13:36:32.884736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.219 [2024-06-11 13:36:32.884763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.219 [2024-06-11 13:36:32.884850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.219 [2024-06-11 13:36:32.884881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.219 #38 NEW cov: 12179 ft: 15034 corp: 18/759b lim: 85 exec/s: 38 rss: 73Mb L: 52/69 MS: 1 ChangeBit- 00:08:40.219 [2024-06-11 13:36:32.964459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.219 [2024-06-11 13:36:32.964496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.219 #43 NEW cov: 12179 ft: 15071 corp: 19/777b lim: 85 exec/s: 43 rss: 73Mb L: 18/69 MS: 5 CMP-CopyPart-CMP-CopyPart-PersAutoDict- DE: "\377\377\377\377\377\377\377\016"-"\000\000"-"\377\003"- 00:08:40.219 [2024-06-11 13:36:33.014715] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.219 [2024-06-11 13:36:33.014752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.219 [2024-06-11 13:36:33.014827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.219 [2024-06-11 13:36:33.014855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.219 #44 NEW cov: 12179 ft: 15092 corp: 20/826b lim: 85 exec/s: 44 rss: 73Mb L: 49/69 MS: 1 InsertByte- 00:08:40.219 [2024-06-11 13:36:33.064649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.219 [2024-06-11 13:36:33.064686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.219 #45 NEW cov: 12179 ft: 15129 corp: 21/853b lim: 85 exec/s: 45 rss: 73Mb L: 27/69 MS: 1 EraseBytes- 00:08:40.478 [2024-06-11 13:36:33.145081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.478 [2024-06-11 13:36:33.145118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.478 [2024-06-11 13:36:33.145192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.478 [2024-06-11 13:36:33.145225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.478 #46 NEW cov: 12179 ft: 15143 corp: 22/892b lim: 85 exec/s: 46 rss: 73Mb L: 39/69 MS: 1 CrossOver- 00:08:40.478 [2024-06-11 
13:36:33.215504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.478 [2024-06-11 13:36:33.215540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.478 [2024-06-11 13:36:33.215620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.478 [2024-06-11 13:36:33.215650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.478 [2024-06-11 13:36:33.215734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:40.478 [2024-06-11 13:36:33.215762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:40.478 #47 NEW cov: 12179 ft: 15173 corp: 23/944b lim: 85 exec/s: 47 rss: 73Mb L: 52/69 MS: 1 CrossOver- 00:08:40.478 [2024-06-11 13:36:33.295545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.478 [2024-06-11 13:36:33.295580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.478 [2024-06-11 13:36:33.295654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.478 [2024-06-11 13:36:33.295682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.478 #48 NEW cov: 12179 ft: 15181 corp: 24/992b lim: 85 exec/s: 48 rss: 73Mb L: 48/69 MS: 1 ShuffleBytes- 00:08:40.478 [2024-06-11 13:36:33.345653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.478 [2024-06-11 13:36:33.345687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.478 [2024-06-11 13:36:33.345763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.479 [2024-06-11 13:36:33.345790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.738 #49 NEW cov: 12179 ft: 15189 corp: 25/1036b lim: 85 exec/s: 49 rss: 73Mb L: 44/69 MS: 1 ShuffleBytes- 00:08:40.738 [2024-06-11 13:36:33.415674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.738 [2024-06-11 13:36:33.415710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.738 #50 NEW cov: 12179 ft: 15210 corp: 26/1055b lim: 85 exec/s: 50 rss: 73Mb L: 19/69 MS: 1 InsertByte- 00:08:40.738 [2024-06-11 13:36:33.495938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.738 [2024-06-11 13:36:33.495976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.738 #51 NEW cov: 12179 ft: 15263 corp: 27/1088b lim: 85 exec/s: 51 rss: 73Mb L: 33/69 MS: 1 InsertRepeatedBytes- 00:08:40.738 [2024-06-11 13:36:33.576328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.738 [2024-06-11 13:36:33.576363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.738 [2024-06-11 13:36:33.576438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.738 [2024-06-11 13:36:33.576466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.738 #52 NEW cov: 12179 ft: 15278 corp: 28/1136b lim: 85 exec/s: 52 rss: 74Mb L: 48/69 MS: 1 PersAutoDict- DE: "\377\003"- 00:08:40.738 [2024-06-11 13:36:33.646535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.738 [2024-06-11 13:36:33.646570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.738 [2024-06-11 13:36:33.646645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.738 [2024-06-11 13:36:33.646678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.997 #53 NEW cov: 12179 ft: 15287 corp: 29/1184b lim: 85 exec/s: 53 rss: 74Mb L: 48/69 MS: 1 CopyPart- 00:08:40.997 [2024-06-11 13:36:33.716723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:40.997 [2024-06-11 13:36:33.716758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:40.997 [2024-06-11 13:36:33.716831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:40.997 [2024-06-11 13:36:33.716859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:40.997 #54 NEW cov: 12179 ft: 15319 corp: 30/1232b lim: 85 exec/s: 27 rss: 74Mb L: 48/69 MS: 1 ShuffleBytes- 00:08:40.997 #54 DONE cov: 12179 ft: 15319 corp: 30/1232b lim: 85 exec/s: 27 rss: 74Mb 00:08:40.997 ###### Recommended dictionary. ###### 00:08:40.997 "\377\003" # Uses: 2 00:08:40.997 "\334\221\206\341\336\345\003\000" # Uses: 0 00:08:40.997 "\377\377\377\377\377\377\377\016" # Uses: 0 00:08:40.997 "\000\000" # Uses: 0 00:08:40.997 ###### End of recommended dictionary. 
###### 00:08:40.997 Done 54 runs in 2 second(s) 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:08:40.997 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4423 00:08:41.256 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:41.256 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:41.256 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:41.256 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:41.256 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:41.256 13:36:33 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:08:41.256 [2024-06-11 13:36:33.939113] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:41.256 [2024-06-11 13:36:33.939175] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447139 ] 00:08:41.256 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.256 [2024-06-11 13:36:34.153060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.516 [2024-06-11 13:36:34.237492] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.516 [2024-06-11 13:36:34.301581] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.516 [2024-06-11 13:36:34.317932] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:41.516 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:41.516 INFO: Seed: 4127034743 00:08:41.516 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:41.516 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:41.516 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:41.516 INFO: A corpus is not provided, starting from an empty corpus 00:08:41.516 #2 INITED exec/s: 0 rss: 64Mb 00:08:41.516 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:41.516 This may also happen if the target rejected all inputs we tried so far 00:08:41.516 [2024-06-11 13:36:34.384908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:41.516 [2024-06-11 13:36:34.384962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.775 NEW_FUNC[1/687]: 0x4ad840 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:41.775 NEW_FUNC[2/687]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:41.775 #12 NEW cov: 11868 ft: 11868 corp: 2/10b lim: 25 exec/s: 0 rss: 71Mb L: 9/9 MS: 5 ShuffleBytes-ChangeBinInt-ChangeByte-ChangeBinInt-CMP- DE: "\000\003\345\337\241\315\271\304"- 00:08:41.775 [2024-06-11 13:36:34.616414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:41.775 [2024-06-11 13:36:34.616468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:41.775 [2024-06-11 13:36:34.616571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:41.775 [2024-06-11 13:36:34.616589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:41.775 [2024-06-11 13:36:34.616690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:41.775 [2024-06-11 13:36:34.616710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:41.775 #13 NEW cov: 11998 ft: 12871 corp: 3/28b lim: 25 exec/s: 0 rss: 71Mb L: 18/18 MS: 1 CopyPart- 00:08:42.034 [2024-06-11 13:36:34.706853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.034 [2024-06-11 13:36:34.706891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.706986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.034 [2024-06-11 13:36:34.707010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.707102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.034 [2024-06-11 13:36:34.707124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.707228] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:42.034 [2024-06-11 13:36:34.707251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.034 #16 NEW cov: 12004 ft: 13540 corp: 4/51b lim: 25 exec/s: 0 rss: 71Mb L: 23/23 MS: 3 CMP-ChangeByte-InsertRepeatedBytes- DE: "\005\000"- 00:08:42.034 [2024-06-11 13:36:34.777039] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.034 [2024-06-11 13:36:34.777077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.777159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.034 [2024-06-11 13:36:34.777184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.777260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.034 [2024-06-11 13:36:34.777284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.034 #17 NEW cov: 12089 ft: 13769 corp: 5/69b lim: 25 exec/s: 0 rss: 71Mb L: 18/23 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:08:42.034 [2024-06-11 13:36:34.867373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.034 [2024-06-11 13:36:34.867409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.867496] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.034 [2024-06-11 13:36:34.867522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.034 [2024-06-11 13:36:34.867615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.034 [2024-06-11 13:36:34.867636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.034 #18 NEW cov: 12089 ft: 13820 corp: 6/87b lim: 25 exec/s: 0 rss: 72Mb L: 18/23 MS: 1 ChangeByte- 00:08:42.293 [2024-06-11 13:36:34.958108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.293 [2024-06-11 13:36:34.958147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:34.958238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.293 [2024-06-11 13:36:34.958263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:34.958339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.293 [2024-06-11 13:36:34.958360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:08:42.293 #19 NEW cov: 12089 ft: 13925 corp: 7/105b lim: 25 exec/s: 0 rss: 72Mb L: 18/23 MS: 1 ChangeByte- 00:08:42.293 [2024-06-11 13:36:35.048577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.293 [2024-06-11 13:36:35.048617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:35.048702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.293 [2024-06-11 13:36:35.048728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:35.048820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.293 [2024-06-11 13:36:35.048844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:35.048951] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:42.293 [2024-06-11 13:36:35.048977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.293 #20 NEW cov: 12089 ft: 14014 corp: 8/129b lim: 25 exec/s: 0 rss: 72Mb L: 24/24 MS: 1 InsertByte- 00:08:42.293 [2024-06-11 13:36:35.138916] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.293 [2024-06-11 13:36:35.138954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:35.139042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.293 [2024-06-11 13:36:35.139064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.293 [2024-06-11 13:36:35.139141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.293 [2024-06-11 13:36:35.139168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.293 #21 NEW cov: 12089 ft: 14091 corp: 9/147b lim: 25 exec/s: 0 rss: 72Mb L: 18/24 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:08:42.552 [2024-06-11 13:36:35.229641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.552 [2024-06-11 13:36:35.229679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.229756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.553 [2024-06-11 13:36:35.229780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.229842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.553 [2024-06-11 13:36:35.229863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:08:42.553 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:42.553 #27 NEW cov: 12112 ft: 14145 corp: 10/165b lim: 25 exec/s: 0 rss: 72Mb L: 18/24 MS: 1 ChangeBinInt- 00:08:42.553 [2024-06-11 13:36:35.319511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.553 [2024-06-11 13:36:35.319547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.553 #28 NEW cov: 12112 ft: 14237 corp: 11/174b lim: 25 exec/s: 28 rss: 72Mb L: 9/24 MS: 1 ChangeBit- 00:08:42.553 [2024-06-11 13:36:35.391168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.553 [2024-06-11 13:36:35.391207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.391301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.553 [2024-06-11 13:36:35.391326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.391422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.553 [2024-06-11 13:36:35.391442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.391551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:42.553 [2024-06-11 13:36:35.391576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.553 #29 NEW cov: 12112 ft: 14284 corp: 12/198b lim: 25 exec/s: 29 rss: 72Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:08:42.553 [2024-06-11 13:36:35.461440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.553 [2024-06-11 13:36:35.461476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.461563] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.553 [2024-06-11 13:36:35.461589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.553 [2024-06-11 13:36:35.461656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.553 [2024-06-11 13:36:35.461680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.815 #30 NEW cov: 12112 ft: 14294 corp: 13/217b lim: 25 exec/s: 30 rss: 72Mb L: 19/24 MS: 1 InsertByte- 00:08:42.815 [2024-06-11 13:36:35.552030] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.815 [2024-06-11 13:36:35.552065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.815 [2024-06-11 13:36:35.552167] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.815 [2024-06-11 13:36:35.552193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.815 [2024-06-11 13:36:35.552290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.815 [2024-06-11 13:36:35.552312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.815 [2024-06-11 13:36:35.552421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:42.815 [2024-06-11 13:36:35.552445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:42.815 #36 NEW cov: 12112 ft: 14329 corp: 14/239b lim: 25 exec/s: 36 rss: 72Mb L: 22/24 MS: 1 EraseBytes- 00:08:42.815 [2024-06-11 13:36:35.641968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:42.815 [2024-06-11 13:36:35.642007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:42.815 [2024-06-11 13:36:35.642095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:42.815 [2024-06-11 13:36:35.642117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:42.815 [2024-06-11 13:36:35.642192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:42.815 [2024-06-11 13:36:35.642218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:42.815 #37 NEW cov: 12112 ft: 14350 corp: 15/258b lim: 25 exec/s: 37 rss: 72Mb L: 19/24 MS: 1 CMP- DE: "\347\346)>\340\345\003\000"- 00:08:43.073 [2024-06-11 13:36:35.732562] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.074 [2024-06-11 13:36:35.732599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.732688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.074 [2024-06-11 13:36:35.732718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.732801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.074 [2024-06-11 13:36:35.732822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.732926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:43.074 [2024-06-11 13:36:35.732946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.074 #38 NEW cov: 12112 ft: 14360 corp: 16/282b lim: 25 exec/s: 38 rss: 72Mb L: 24/24 MS: 1 CrossOver- 00:08:43.074 [2024-06-11 13:36:35.802800] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.074 [2024-06-11 13:36:35.802836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.802930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.074 [2024-06-11 13:36:35.802954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.803029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.074 [2024-06-11 13:36:35.803051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.074 #39 NEW cov: 12112 ft: 14398 corp: 17/301b lim: 25 exec/s: 39 rss: 72Mb L: 19/24 MS: 1 InsertByte- 00:08:43.074 [2024-06-11 13:36:35.863063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.074 [2024-06-11 13:36:35.863102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.863197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.074 [2024-06-11 13:36:35.863227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.863316] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.074 [2024-06-11 13:36:35.863337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.074 #40 NEW cov: 12112 ft: 14443 corp: 18/318b lim: 25 exec/s: 40 rss: 72Mb L: 17/24 MS: 1 PersAutoDict- DE: "\000\003\345\337\241\315\271\304"- 00:08:43.074 [2024-06-11 13:36:35.953637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.074 [2024-06-11 13:36:35.953673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.953753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.074 [2024-06-11 13:36:35.953774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.074 [2024-06-11 13:36:35.953846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.074 [2024-06-11 13:36:35.953868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.333 #41 NEW cov: 12112 ft: 14455 corp: 19/336b lim: 25 exec/s: 41 rss: 72Mb L: 18/24 MS: 1 ChangeBinInt- 00:08:43.333 [2024-06-11 13:36:36.014533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.333 [2024-06-11 13:36:36.014571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.333 
[2024-06-11 13:36:36.014660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.333 [2024-06-11 13:36:36.014686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.333 [2024-06-11 13:36:36.014767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.333 [2024-06-11 13:36:36.014790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.333 [2024-06-11 13:36:36.014889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:43.333 [2024-06-11 13:36:36.014915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.333 #42 NEW cov: 12112 ft: 14474 corp: 20/356b lim: 25 exec/s: 42 rss: 72Mb L: 20/24 MS: 1 InsertByte- 00:08:43.333 [2024-06-11 13:36:36.104117] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.333 [2024-06-11 13:36:36.104154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.333 [2024-06-11 13:36:36.104243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.333 [2024-06-11 13:36:36.104264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.333 #43 NEW cov: 12112 ft: 14705 corp: 21/368b lim: 25 exec/s: 43 rss: 72Mb L: 12/24 MS: 1 EraseBytes- 00:08:43.333 [2024-06-11 13:36:36.175005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.333 [2024-06-11 13:36:36.175040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.333 [2024-06-11 13:36:36.175122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.333 [2024-06-11 13:36:36.175146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.333 [2024-06-11 13:36:36.175226] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.333 [2024-06-11 13:36:36.175250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.333 #49 NEW cov: 12112 ft: 14775 corp: 22/383b lim: 25 exec/s: 49 rss: 72Mb L: 15/24 MS: 1 EraseBytes- 00:08:43.592 [2024-06-11 13:36:36.265613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.592 [2024-06-11 13:36:36.265651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.592 [2024-06-11 13:36:36.265737] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.592 [2024-06-11 13:36:36.265765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.592 [2024-06-11 
13:36:36.265847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.592 [2024-06-11 13:36:36.265868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.592 #50 NEW cov: 12112 ft: 14780 corp: 23/402b lim: 25 exec/s: 50 rss: 72Mb L: 19/24 MS: 1 InsertByte- 00:08:43.592 [2024-06-11 13:36:36.326549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:43.592 [2024-06-11 13:36:36.326587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:43.592 [2024-06-11 13:36:36.326695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:43.592 [2024-06-11 13:36:36.326721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:43.592 [2024-06-11 13:36:36.326806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:43.592 [2024-06-11 13:36:36.326828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:43.592 [2024-06-11 13:36:36.326934] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:43.592 [2024-06-11 13:36:36.326959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:43.592 #51 NEW cov: 12112 ft: 14800 corp: 24/423b lim: 25 exec/s: 25 rss: 72Mb L: 21/24 MS: 1 CopyPart- 00:08:43.592 #51 DONE cov: 12112 ft: 14800 corp: 24/423b lim: 25 exec/s: 25 rss: 72Mb 00:08:43.592 ###### Recommended dictionary. ###### 00:08:43.592 "\000\003\345\337\241\315\271\304" # Uses: 1 00:08:43.592 "\005\000" # Uses: 0 00:08:43.592 "\001\000\000\000\000\000\000\000" # Uses: 1 00:08:43.592 "\347\346)>\340\345\003\000" # Uses: 0 00:08:43.593 ###### End of recommended dictionary. 
###### 00:08:43.593 Done 51 runs in 2 second(s) 00:08:43.866 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:08:43.866 13:36:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@34 -- # port=4424 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:08:43.867 13:36:36 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:08:43.867 [2024-06-11 13:36:36.566567] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:43.867 [2024-06-11 13:36:36.566631] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3447568 ] 00:08:43.867 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.867 [2024-06-11 13:36:36.775943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.126 [2024-06-11 13:36:36.860257] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.126 [2024-06-11 13:36:36.924178] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.126 [2024-06-11 13:36:36.940524] tcp.c: 968:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:44.126 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:44.126 INFO: Seed: 2453056738 00:08:44.126 INFO: Loaded 1 modules (357443 inline 8-bit counters): 357443 [0x29a090c, 0x29f7d4f), 00:08:44.126 INFO: Loaded 1 PC tables (357443 PCs): 357443 [0x29f7d50,0x2f6c180), 00:08:44.126 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:44.126 INFO: A corpus is not provided, starting from an empty corpus 00:08:44.126 #2 INITED exec/s: 0 rss: 63Mb 00:08:44.126 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:44.126 This may also happen if the target rejected all inputs we tried so far 00:08:44.126 [2024-06-11 13:36:37.011790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.126 [2024-06-11 13:36:37.011836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.126 [2024-06-11 13:36:37.011936] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.126 [2024-06-11 13:36:37.011955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.385 NEW_FUNC[1/688]: 0x4ae920 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:44.385 NEW_FUNC[2/688]: 0x4bf580 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:44.385 #4 NEW cov: 11924 ft: 11902 corp: 2/54b lim: 100 exec/s: 0 rss: 71Mb L: 53/53 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:44.385 [2024-06-11 13:36:37.212509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.385 [2024-06-11 13:36:37.212561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.385 [2024-06-11 13:36:37.212633] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.385 [2024-06-11 13:36:37.212654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.385 #5 NEW cov: 12070 ft: 12437 corp: 3/107b lim: 100 exec/s: 0 rss: 72Mb L: 53/53 MS: 1 CrossOver- 00:08:44.644 [2024-06-11 13:36:37.303108] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.303146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.645 [2024-06-11 13:36:37.303231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.303253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.645 [2024-06-11 13:36:37.303318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.303342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:44.645 #10 NEW cov: 12076 ft: 12981 corp: 4/185b lim: 100 exec/s: 0 rss: 72Mb L: 78/78 MS: 5 CrossOver-EraseBytes-ChangeBit-CopyPart-InsertRepeatedBytes- 00:08:44.645 [2024-06-11 13:36:37.373034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.373074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.645 [2024-06-11 13:36:37.373151] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.373173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.645 #11 NEW cov: 12161 ft: 13406 corp: 5/230b lim: 100 exec/s: 0 rss: 72Mb L: 45/78 MS: 1 EraseBytes- 00:08:44.645 [2024-06-11 13:36:37.463022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:722123008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.463064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.645 #21 NEW cov: 12161 ft: 14280 corp: 6/265b lim: 100 exec/s: 0 rss: 72Mb L: 35/78 MS: 5 InsertByte-CrossOver-CMP-EraseBytes-InsertRepeatedBytes- DE: "\377\377\377\377\377\377\377\000"- 00:08:44.645 [2024-06-11 13:36:37.533692] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:722123008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.533727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.645 [2024-06-11 13:36:37.533802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.645 [2024-06-11 13:36:37.533825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.904 #22 NEW cov: 12161 ft: 14344 corp: 7/308b lim: 100 exec/s: 0 rss: 72Mb L: 43/78 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:08:44.904 [2024-06-11 13:36:37.624504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18405367248039444479 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.624541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.904 [2024-06-11 13:36:37.624616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.624640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.904 [2024-06-11 13:36:37.624725] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.624747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:44.904 #23 NEW cov: 12161 ft: 14490 corp: 8/386b lim: 100 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 ChangeByte- 00:08:44.904 [2024-06-11 13:36:37.714794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.714832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:44.904 [2024-06-11 13:36:37.714917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.714942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:44.904 [2024-06-11 13:36:37.715023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.715044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:44.904 #24 NEW cov: 12161 ft: 14513 corp: 9/464b lim: 100 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 ShuffleBytes- 00:08:44.904 [2024-06-11 13:36:37.784288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:722123008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:44.904 [2024-06-11 13:36:37.784325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.162 #25 NEW cov: 12161 ft: 14571 corp: 10/499b lim: 100 exec/s: 0 rss: 72Mb L: 35/78 MS: 1 ShuffleBytes- 00:08:45.162 [2024-06-11 13:36:37.854530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:170592953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.162 [2024-06-11 13:36:37.854571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.162 NEW_FUNC[1/1]: 0x1a71960 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:45.162 #26 NEW cov: 12184 ft: 14640 corp: 11/535b lim: 100 exec/s: 0 rss: 72Mb L: 36/78 MS: 1 CrossOver- 00:08:45.162 [2024-06-11 13:36:37.925736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.162 [2024-06-11 13:36:37.925776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.162 [2024-06-11 13:36:37.925857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.162 [2024-06-11 13:36:37.925882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.162 [2024-06-11 13:36:37.925951] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.162 [2024-06-11 13:36:37.925972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.163 #27 NEW cov: 12184 ft: 14664 corp: 12/613b lim: 100 exec/s: 0 rss: 72Mb L: 78/78 MS: 1 ChangeBit- 00:08:45.163 [2024-06-11 13:36:37.986214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744072530558975 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.163 [2024-06-11 13:36:37.986254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.163 [2024-06-11 13:36:37.986326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.163 [2024-06-11 13:36:37.986352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.163 [2024-06-11 13:36:37.986441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:13382931975044184505 len:47546 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.163 [2024-06-11 13:36:37.986460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.163 #28 NEW cov: 12184 ft: 14683 corp: 13/686b lim: 100 exec/s: 28 rss: 72Mb L: 73/78 MS: 1 InsertRepeatedBytes- 00:08:45.420 [2024-06-11 13:36:38.076723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18405367248039444479 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.420 [2024-06-11 13:36:38.076762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.420 [2024-06-11 13:36:38.076835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.420 [2024-06-11 13:36:38.076858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.420 [2024-06-11 13:36:38.076930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.420 [2024-06-11 13:36:38.076950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.420 #29 NEW cov: 12184 ft: 14731 corp: 14/764b lim: 100 exec/s: 29 rss: 72Mb L: 78/78 MS: 1 ShuffleBytes- 00:08:45.420 [2024-06-11 13:36:38.167194] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:722123008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.420 [2024-06-11 13:36:38.167239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.420 [2024-06-11 13:36:38.167324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.420 [2024-06-11 13:36:38.167350] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.420 [2024-06-11 13:36:38.167444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744072535146495 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.421 [2024-06-11 13:36:38.167465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.421 [2024-06-11 13:36:38.167564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.421 [2024-06-11 13:36:38.167585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.421 #30 NEW cov: 12184 ft: 15079 corp: 15/849b lim: 100 exec/s: 30 rss: 72Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:08:45.421 [2024-06-11 13:36:38.256762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5017090304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.421 [2024-06-11 13:36:38.256805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.421 [2024-06-11 13:36:38.256889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.421 [2024-06-11 13:36:38.256912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.421 #31 NEW cov: 12184 ft: 15099 corp: 16/892b lim: 100 exec/s: 31 rss: 72Mb L: 43/85 MS: 1 ChangeBinInt- 00:08:45.421 [2024-06-11 13:36:38.327120] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5017090304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.421 [2024-06-11 13:36:38.327158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.421 [2024-06-11 13:36:38.327240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744069414584320 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.421 [2024-06-11 13:36:38.327264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.725 #32 NEW cov: 12184 ft: 15163 corp: 17/951b lim: 100 exec/s: 32 rss: 72Mb L: 59/85 MS: 1 CopyPart- 00:08:45.725 [2024-06-11 13:36:38.417642] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5017090304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.725 [2024-06-11 13:36:38.417686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.725 [2024-06-11 13:36:38.417778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:4294901760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.725 [2024-06-11 13:36:38.417803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.725 #33 NEW cov: 12184 ft: 15186 corp: 18/1002b lim: 100 exec/s: 33 rss: 72Mb L: 51/85 MS: 1 PersAutoDict- DE: 
"\377\377\377\377\377\377\377\000"- 00:08:45.725 [2024-06-11 13:36:38.478328] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.725 [2024-06-11 13:36:38.478367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.726 [2024-06-11 13:36:38.478442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.478468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.726 [2024-06-11 13:36:38.478533] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.478559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.726 #34 NEW cov: 12184 ft: 15247 corp: 19/1080b lim: 100 exec/s: 34 rss: 73Mb L: 78/85 MS: 1 ChangeBinInt- 00:08:45.726 [2024-06-11 13:36:38.568717] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18405367248039444479 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.568753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.726 [2024-06-11 13:36:38.568835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.568856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.726 [2024-06-11 13:36:38.568949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.568973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.726 #35 NEW cov: 12184 ft: 15272 corp: 20/1158b lim: 100 exec/s: 35 rss: 73Mb L: 78/85 MS: 1 ChangeByte- 00:08:45.726 [2024-06-11 13:36:38.629041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.629079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.726 [2024-06-11 13:36:38.629163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.629185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.726 [2024-06-11 13:36:38.629283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.726 [2024-06-11 13:36:38.629303] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.985 #36 NEW cov: 12184 ft: 15274 corp: 21/1236b lim: 100 exec/s: 36 rss: 73Mb L: 78/85 MS: 1 CopyPart- 00:08:45.985 [2024-06-11 13:36:38.689350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:722099385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.985 [2024-06-11 13:36:38.689391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.985 [2024-06-11 13:36:38.689470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:72057589742960640 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.985 [2024-06-11 13:36:38.689494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.985 [2024-06-11 13:36:38.689566] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:186 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.985 [2024-06-11 13:36:38.689588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.985 #37 NEW cov: 12184 ft: 15287 corp: 22/1296b lim: 100 exec/s: 37 rss: 73Mb L: 60/85 MS: 1 InsertByte- 00:08:45.985 [2024-06-11 13:36:38.780020] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18405367248039444479 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.985 [2024-06-11 13:36:38.780061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.985 [2024-06-11 13:36:38.780144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.985 [2024-06-11 13:36:38.780167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.986 [2024-06-11 13:36:38.780253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.986 [2024-06-11 13:36:38.780276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:45.986 [2024-06-11 13:36:38.780377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.986 [2024-06-11 13:36:38.780400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:45.986 #38 NEW cov: 12184 ft: 15299 corp: 23/1382b lim: 100 exec/s: 38 rss: 73Mb L: 86/86 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:08:45.986 [2024-06-11 13:36:38.849921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744069615910911 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.986 [2024-06-11 13:36:38.849960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:45.986 [2024-06-11 13:36:38.850038] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.986 [2024-06-11 13:36:38.850063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:45.986 [2024-06-11 13:36:38.850145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:45.986 [2024-06-11 13:36:38.850165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:46.245 #39 NEW cov: 12184 ft: 15318 corp: 24/1460b lim: 100 exec/s: 39 rss: 73Mb L: 78/86 MS: 1 ShuffleBytes- 00:08:46.245 [2024-06-11 13:36:38.939438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:5017090304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:46.245 [2024-06-11 13:36:38.939479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:46.245 #40 NEW cov: 12184 ft: 15324 corp: 25/1493b lim: 100 exec/s: 20 rss: 73Mb L: 33/86 MS: 1 EraseBytes- 00:08:46.245 #40 DONE cov: 12184 ft: 15324 corp: 25/1493b lim: 100 exec/s: 20 rss: 73Mb 00:08:46.246 ###### Recommended dictionary. ###### 00:08:46.246 "\377\377\377\377\377\377\377\000" # Uses: 3 00:08:46.246 ###### End of recommended dictionary. ###### 00:08:46.246 Done 40 runs in 2 second(s) 00:08:46.246 13:36:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:08:46.246 13:36:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:46.246 13:36:39 llvm_fuzz.nvmf_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:46.246 13:36:39 llvm_fuzz.nvmf_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:08:46.246 00:08:46.246 real 1m7.143s 00:08:46.246 user 1m46.063s 00:08:46.246 sys 0m8.119s 00:08:46.246 13:36:39 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:08:46.246 13:36:39 llvm_fuzz.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:46.246 ************************************ 00:08:46.246 END TEST nvmf_fuzz 00:08:46.246 ************************************ 00:08:46.507 13:36:39 llvm_fuzz -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:46.507 13:36:39 llvm_fuzz -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:46.507 13:36:39 llvm_fuzz -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:46.507 13:36:39 llvm_fuzz -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:08:46.507 13:36:39 llvm_fuzz -- common/autotest_common.sh@1106 -- # xtrace_disable 00:08:46.507 13:36:39 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:08:46.507 ************************************ 00:08:46.507 START TEST vfio_fuzz 00:08:46.507 ************************************ 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:46.507 * Looking for test storage... 
00:08:46.507 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@34 -- # set -e 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz 
-- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:46.507 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:46.508 13:36:39 
llvm_fuzz.vfio_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:46.508 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:46.508 #define SPDK_CONFIG_H 00:08:46.508 #define SPDK_CONFIG_APPS 1 00:08:46.508 #define SPDK_CONFIG_ARCH native 00:08:46.508 #undef SPDK_CONFIG_ASAN 00:08:46.508 #undef SPDK_CONFIG_AVAHI 00:08:46.508 #undef SPDK_CONFIG_CET 00:08:46.508 #define SPDK_CONFIG_COVERAGE 1 00:08:46.508 #define SPDK_CONFIG_CROSS_PREFIX 00:08:46.508 #undef SPDK_CONFIG_CRYPTO 00:08:46.508 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:46.508 #undef SPDK_CONFIG_CUSTOMOCF 00:08:46.508 #undef SPDK_CONFIG_DAOS 00:08:46.508 #define SPDK_CONFIG_DAOS_DIR 00:08:46.508 #define SPDK_CONFIG_DEBUG 1 00:08:46.508 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:46.508 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:46.508 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:46.508 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:46.508 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:46.508 #undef SPDK_CONFIG_DPDK_UADK 00:08:46.508 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:46.508 #define SPDK_CONFIG_EXAMPLES 1 00:08:46.508 #undef SPDK_CONFIG_FC 00:08:46.508 #define SPDK_CONFIG_FC_PATH 00:08:46.508 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:46.508 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:46.508 #undef SPDK_CONFIG_FUSE 00:08:46.508 #define SPDK_CONFIG_FUZZER 1 00:08:46.508 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:46.508 #undef SPDK_CONFIG_GOLANG 00:08:46.508 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:46.508 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:46.508 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:46.508 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:46.508 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:46.508 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:46.508 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:46.508 #define SPDK_CONFIG_IDXD 1 00:08:46.508 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:46.508 #undef SPDK_CONFIG_IPSEC_MB 00:08:46.508 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:46.508 #define SPDK_CONFIG_ISAL 1 00:08:46.508 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:46.508 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:46.508 #define SPDK_CONFIG_LIBDIR 00:08:46.508 #undef SPDK_CONFIG_LTO 00:08:46.508 #define SPDK_CONFIG_MAX_LCORES 00:08:46.508 #define SPDK_CONFIG_NVME_CUSE 1 00:08:46.508 #undef SPDK_CONFIG_OCF 00:08:46.508 #define SPDK_CONFIG_OCF_PATH 00:08:46.508 #define SPDK_CONFIG_OPENSSL_PATH 00:08:46.508 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:46.508 #define SPDK_CONFIG_PGO_DIR 00:08:46.508 #undef SPDK_CONFIG_PGO_USE 00:08:46.508 #define SPDK_CONFIG_PREFIX /usr/local 00:08:46.508 #undef SPDK_CONFIG_RAID5F 00:08:46.508 #undef 
SPDK_CONFIG_RBD 00:08:46.508 #define SPDK_CONFIG_RDMA 1 00:08:46.508 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:46.508 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:46.508 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:46.508 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:46.508 #undef SPDK_CONFIG_SHARED 00:08:46.508 #undef SPDK_CONFIG_SMA 00:08:46.508 #define SPDK_CONFIG_TESTS 1 00:08:46.508 #undef SPDK_CONFIG_TSAN 00:08:46.508 #define SPDK_CONFIG_UBLK 1 00:08:46.508 #define SPDK_CONFIG_UBSAN 1 00:08:46.508 #undef SPDK_CONFIG_UNIT_TESTS 00:08:46.508 #undef SPDK_CONFIG_URING 00:08:46.508 #define SPDK_CONFIG_URING_PATH 00:08:46.508 #undef SPDK_CONFIG_URING_ZNS 00:08:46.508 #undef SPDK_CONFIG_USDT 00:08:46.508 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:46.508 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:46.508 #define SPDK_CONFIG_VFIO_USER 1 00:08:46.508 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:46.508 #define SPDK_CONFIG_VHOST 1 00:08:46.508 #define SPDK_CONFIG_VIRTIO 1 00:08:46.508 #undef SPDK_CONFIG_VTUNE 00:08:46.508 #define SPDK_CONFIG_VTUNE_DIR 00:08:46.508 #define SPDK_CONFIG_WERROR 1 00:08:46.508 #define SPDK_CONFIG_WPDK_DIR 00:08:46.508 #undef SPDK_CONFIG_XNVME 00:08:46.509 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- paths/export.sh@5 -- # export PATH 00:08:46.509 13:36:39 
llvm_fuzz.vfio_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # uname -s 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@68 -- # PM_OS=Linux 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[0]= 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@58 -- # : 1 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@62 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@64 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@66 -- # : 1 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@68 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@70 -- # : 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@72 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@74 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@76 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@78 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@80 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@82 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@84 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@86 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@88 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@90 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@92 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@94 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@96 -- # : 0 00:08:46.509 
13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@98 -- # : 1 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@100 -- # : 1 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@104 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@106 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@108 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@110 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@112 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@114 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@116 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@118 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@120 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@122 -- # : 1 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@124 -- # : 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@126 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@128 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@130 -- # : 0 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:46.509 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@132 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@134 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@136 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@138 -- # : 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@140 -- # : true 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@142 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@144 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@146 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@148 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@150 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@152 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@154 -- # : 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@156 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@158 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@160 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@162 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@164 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@167 -- # : 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@169 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@171 -- # : 0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 
00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@200 -- # cat 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # export 
SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # export valgrind= 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@263 -- # valgrind= 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # uname -s 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@279 -- # MAKE=make 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j88 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:46.510 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # [[ -z 3448036 ]] 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@318 -- # kill -0 3448036 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:46.511 
13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.0EcQjo 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.0EcQjo/tests/vfio /tmp/spdk.0EcQjo 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # df -T 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=82731921408 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=94507954176 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=11776032768 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47249264640 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253975040 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=18895683584 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=18901594112 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=5910528 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=47253217280 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=47253979136 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=761856 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # avails["$mount"]=9450790912 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@362 -- # sizes["$mount"]=9450795008 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:46.511 * Looking for test storage... 
00:08:46.511 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@372 -- # mount=/ 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@374 -- # target_space=82731921408 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@381 -- # new_size=13990625280 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.770 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@389 -- # return 0 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1681 -- # set -o errtrace 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1686 -- # true 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1688 -- # xtrace_fd 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@27 -- # exec 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@29 -- # exec 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@18 -- # set -x 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- ../common.sh@8 -- # pids=() 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- ../common.sh@70 -- # local time=1 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:46.770 13:36:39 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:46.771 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo 
leak:nvmf_ctrlr_create 00:08:46.771 13:36:39 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:46.771 [2024-06-11 13:36:39.479261] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:46.771 [2024-06-11 13:36:39.479327] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448079 ] 00:08:46.771 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.771 [2024-06-11 13:36:39.564894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.771 [2024-06-11 13:36:39.659496] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.029 INFO: Running with entropic power schedule (0xFF, 100). 00:08:47.029 INFO: Seed: 1067107897 00:08:47.029 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:08:47.029 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:08:47.029 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:47.029 INFO: A corpus is not provided, starting from an empty corpus 00:08:47.029 #2 INITED exec/s: 0 rss: 66Mb 00:08:47.030 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:47.030 This may also happen if the target rejected all inputs we tried so far 00:08:47.030 [2024-06-11 13:36:39.933900] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:08:47.546 NEW_FUNC[1/646]: 0x4828a0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:08:47.546 NEW_FUNC[2/646]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:47.546 #13 NEW cov: 10918 ft: 10841 corp: 2/7b lim: 6 exec/s: 0 rss: 71Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:08:47.805 #19 NEW cov: 10935 ft: 13712 corp: 3/13b lim: 6 exec/s: 0 rss: 72Mb L: 6/6 MS: 1 ChangeByte- 00:08:47.805 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:47.805 #20 NEW cov: 10952 ft: 14883 corp: 4/19b lim: 6 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:08:48.063 #21 NEW cov: 10952 ft: 15816 corp: 5/25b lim: 6 exec/s: 21 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:08:48.322 #22 NEW cov: 10952 ft: 15842 corp: 6/31b lim: 6 exec/s: 22 rss: 73Mb L: 6/6 MS: 1 CopyPart- 00:08:48.581 #26 NEW cov: 10952 ft: 16179 corp: 7/37b lim: 6 exec/s: 26 rss: 73Mb L: 6/6 MS: 4 ChangeByte-InsertRepeatedBytes-ChangeBit-InsertByte- 00:08:48.840 #27 NEW cov: 10952 ft: 16218 corp: 8/43b lim: 6 exec/s: 27 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:48.840 #28 NEW cov: 10959 ft: 16317 corp: 9/49b lim: 6 exec/s: 28 rss: 73Mb L: 6/6 MS: 1 ChangeByte- 00:08:49.099 #29 NEW cov: 10959 ft: 16393 corp: 10/55b lim: 6 exec/s: 14 rss: 73Mb L: 6/6 MS: 1 ChangeBinInt- 00:08:49.099 #29 DONE cov: 10959 ft: 16393 corp: 10/55b lim: 6 exec/s: 14 rss: 73Mb 00:08:49.099 Done 29 
runs in 2 second(s) 00:08:49.099 [2024-06-11 13:36:41.905439] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:49.358 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:49.358 13:36:42 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:49.358 [2024-06-11 13:36:42.238371] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:49.358 [2024-06-11 13:36:42.238439] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448519 ] 00:08:49.616 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.616 [2024-06-11 13:36:42.325330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.616 [2024-06-11 13:36:42.422940] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.875 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:49.875 INFO: Seed: 3836095936 00:08:49.875 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:08:49.875 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:08:49.875 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:49.875 INFO: A corpus is not provided, starting from an empty corpus 00:08:49.875 #2 INITED exec/s: 0 rss: 66Mb 00:08:49.875 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:49.875 This may also happen if the target rejected all inputs we tried so far 00:08:49.875 [2024-06-11 13:36:42.702440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:08:49.875 [2024-06-11 13:36:42.778197] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:49.875 [2024-06-11 13:36:42.778241] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:49.875 [2024-06-11 13:36:42.778263] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:50.392 NEW_FUNC[1/648]: 0x482e40 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:08:50.392 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:50.392 #15 NEW cov: 10914 ft: 10617 corp: 2/5b lim: 4 exec/s: 0 rss: 71Mb L: 4/4 MS: 3 ShuffleBytes-InsertByte-CopyPart- 00:08:50.392 [2024-06-11 13:36:43.162996] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:50.392 [2024-06-11 13:36:43.163032] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:50.392 [2024-06-11 13:36:43.163053] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:50.651 #26 NEW cov: 10928 ft: 13611 corp: 3/9b lim: 4 exec/s: 0 rss: 72Mb L: 4/4 MS: 1 ChangeByte- 00:08:50.651 [2024-06-11 13:36:43.426395] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:50.651 [2024-06-11 13:36:43.426424] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:50.651 [2024-06-11 13:36:43.426444] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:50.909 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:50.909 #27 NEW cov: 10948 ft: 13836 corp: 4/13b lim: 4 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:08:50.909 [2024-06-11 13:36:43.688334] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:50.909 [2024-06-11 13:36:43.688363] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:50.909 [2024-06-11 13:36:43.688383] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.168 #28 NEW cov: 10948 ft: 15209 corp: 5/17b lim: 4 exec/s: 28 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:08:51.168 [2024-06-11 13:36:43.967832] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.168 [2024-06-11 13:36:43.967862] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.168 [2024-06-11 13:36:43.967883] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 
00:08:51.426 #29 NEW cov: 10948 ft: 15550 corp: 6/21b lim: 4 exec/s: 29 rss: 73Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:51.426 [2024-06-11 13:36:44.219034] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.426 [2024-06-11 13:36:44.219063] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.426 [2024-06-11 13:36:44.219083] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.684 #30 NEW cov: 10948 ft: 15936 corp: 7/25b lim: 4 exec/s: 30 rss: 73Mb L: 4/4 MS: 1 ChangeBit- 00:08:51.684 [2024-06-11 13:36:44.469356] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.684 [2024-06-11 13:36:44.469388] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.684 [2024-06-11 13:36:44.469410] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:51.943 #33 NEW cov: 10955 ft: 16157 corp: 8/29b lim: 4 exec/s: 33 rss: 74Mb L: 4/4 MS: 3 ShuffleBytes-CrossOver-CrossOver- 00:08:51.943 [2024-06-11 13:36:44.721820] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:51.943 [2024-06-11 13:36:44.721848] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:51.943 [2024-06-11 13:36:44.721873] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:52.201 #34 NEW cov: 10955 ft: 16378 corp: 9/33b lim: 4 exec/s: 17 rss: 74Mb L: 4/4 MS: 1 ChangeBinInt- 00:08:52.202 #34 DONE cov: 10955 ft: 16378 corp: 9/33b lim: 4 exec/s: 17 rss: 74Mb 00:08:52.202 Done 34 runs in 2 second(s) 00:08:52.202 [2024-06-11 13:36:44.897450] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 
00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:52.460 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:52.460 13:36:45 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:52.460 [2024-06-11 13:36:45.261652] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:08:52.460 [2024-06-11 13:36:45.261728] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449099 ] 00:08:52.460 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.460 [2024-06-11 13:36:45.347774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.719 [2024-06-11 13:36:45.448206] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.989 INFO: Running with entropic power schedule (0xFF, 100). 00:08:52.989 INFO: Seed: 2565128082 00:08:52.989 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:08:52.989 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:08:52.989 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:52.989 INFO: A corpus is not provided, starting from an empty corpus 00:08:52.989 #2 INITED exec/s: 0 rss: 64Mb 00:08:52.989 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:52.989 This may also happen if the target rejected all inputs we tried so far 00:08:52.989 [2024-06-11 13:36:45.721460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:08:52.989 [2024-06-11 13:36:45.809324] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.303 NEW_FUNC[1/646]: 0x483820 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:08:53.303 NEW_FUNC[2/646]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:53.304 #20 NEW cov: 10882 ft: 10536 corp: 2/9b lim: 8 exec/s: 0 rss: 71Mb L: 8/8 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:08:53.304 [2024-06-11 13:36:46.190281] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.562 NEW_FUNC[1/1]: 0xf38010 in spdk_ring_dequeue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:416 00:08:53.562 #21 NEW cov: 10914 ft: 12934 corp: 3/17b lim: 8 exec/s: 0 rss: 72Mb L: 8/8 MS: 1 ChangeBinInt- 00:08:53.562 [2024-06-11 13:36:46.466145] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:53.820 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:53.820 #22 NEW cov: 10931 ft: 14543 corp: 4/25b lim: 8 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:08:54.078 [2024-06-11 13:36:46.743434] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:54.078 #23 NEW cov: 10931 ft: 14705 corp: 5/33b lim: 8 exec/s: 23 rss: 73Mb L: 8/8 MS: 1 CrossOver- 00:08:54.336 [2024-06-11 13:36:47.003092] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:54.337 #24 NEW cov: 10931 ft: 14907 corp: 6/41b lim: 8 exec/s: 24 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:08:54.595 [2024-06-11 13:36:47.266226] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:54.595 #25 NEW cov: 10931 ft: 15276 corp: 7/49b lim: 8 exec/s: 25 rss: 73Mb L: 8/8 MS: 1 ChangeBit- 00:08:54.854 [2024-06-11 13:36:47.531390] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:54.854 #26 NEW cov: 10938 ft: 15572 corp: 8/57b lim: 8 exec/s: 13 rss: 73Mb L: 8/8 MS: 1 CMP- DE: "\377\377\037\000\023\216P\000"- 00:08:54.854 #26 DONE cov: 10938 ft: 15572 corp: 8/57b lim: 8 exec/s: 13 rss: 73Mb 00:08:54.854 ###### Recommended dictionary. ###### 00:08:54.854 "\377\377\037\000\023\216P\000" # Uses: 0 00:08:54.854 ###### End of recommended dictionary. 
###### 00:08:54.854 Done 26 runs in 2 second(s) 00:08:54.854 [2024-06-11 13:36:47.706444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:55.422 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:55.422 13:36:48 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:55.422 [2024-06-11 13:36:48.071261] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:55.422 [2024-06-11 13:36:48.071352] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3449580 ] 00:08:55.422 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.422 [2024-06-11 13:36:48.158094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.422 [2024-06-11 13:36:48.256260] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.681 INFO: Running with entropic power schedule (0xFF, 100). 00:08:55.681 INFO: Seed: 1078154216 00:08:55.681 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:08:55.681 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:08:55.681 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:55.681 INFO: A corpus is not provided, starting from an empty corpus 00:08:55.681 #2 INITED exec/s: 0 rss: 65Mb 00:08:55.681 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:55.681 This may also happen if the target rejected all inputs we tried so far 00:08:55.681 [2024-06-11 13:36:48.530423] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:08:55.938 NEW_FUNC[1/647]: 0x483f00 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:08:55.938 NEW_FUNC[2/647]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:55.938 #103 NEW cov: 10901 ft: 10688 corp: 2/33b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:08:56.197 #109 NEW cov: 10915 ft: 14254 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 ChangeBit- 00:08:56.455 #110 NEW cov: 10915 ft: 15700 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:56.714 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:56.714 #111 NEW cov: 10935 ft: 15882 corp: 5/129b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeByte- 00:08:56.972 #112 NEW cov: 10935 ft: 15964 corp: 6/161b lim: 32 exec/s: 112 rss: 74Mb L: 32/32 MS: 1 CMP- DE: "\001\000\000\000\000\000\000q"- 00:08:56.972 #123 NEW cov: 10935 ft: 16481 corp: 7/193b lim: 32 exec/s: 123 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:57.231 #124 NEW cov: 10935 ft: 16542 corp: 8/225b lim: 32 exec/s: 124 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:57.490 #125 NEW cov: 10935 ft: 16629 corp: 9/257b lim: 32 exec/s: 125 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:08:57.749 #126 NEW cov: 10942 ft: 17006 corp: 10/289b lim: 32 exec/s: 126 rss: 74Mb L: 32/32 MS: 1 CrossOver- 00:08:57.749 #127 NEW cov: 10942 ft: 17118 corp: 11/321b lim: 32 exec/s: 63 rss: 74Mb L: 32/32 MS: 1 ChangeBit- 00:08:57.749 #127 DONE cov: 10942 ft: 17118 corp: 11/321b lim: 32 exec/s: 63 rss: 74Mb 00:08:57.749 ###### Recommended dictionary. ###### 00:08:57.749 "\001\000\000\000\000\000\000q" # Uses: 0 00:08:57.749 ###### End of recommended dictionary. 
###### 00:08:57.749 Done 127 runs in 2 second(s) 00:08:57.749 [2024-06-11 13:36:50.623460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:58.319 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:08:58.319 13:36:50 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:58.319 [2024-06-11 13:36:50.983204] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:08:58.319 [2024-06-11 13:36:50.983287] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450029 ] 00:08:58.319 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.319 [2024-06-11 13:36:51.069555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.319 [2024-06-11 13:36:51.166317] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.578 INFO: Running with entropic power schedule (0xFF, 100). 00:08:58.578 INFO: Seed: 3984162276 00:08:58.578 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:08:58.578 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:08:58.578 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:58.578 INFO: A corpus is not provided, starting from an empty corpus 00:08:58.578 #2 INITED exec/s: 0 rss: 66Mb 00:08:58.578 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:58.578 This may also happen if the target rejected all inputs we tried so far 00:08:58.578 [2024-06-11 13:36:51.439410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:08:58.837 [2024-06-11 13:36:51.520656] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:08:58.837 [2024-06-11 13:36:51.520691] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) offset=0xa06060606060606 flags=0x3: No space left on device 00:08:58.837 [2024-06-11 13:36:51.520705] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:58.837 [2024-06-11 13:36:51.520730] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:58.837 [2024-06-11 13:36:51.521638] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) flags=0: No such file or directory 00:08:58.837 [2024-06-11 13:36:51.521661] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:58.837 [2024-06-11 13:36:51.521680] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:59.097 NEW_FUNC[1/648]: 0x484780 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:08:59.097 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:59.097 #141 NEW cov: 10914 ft: 10608 corp: 2/33b lim: 32 exec/s: 0 rss: 71Mb L: 32/32 MS: 4 InsertRepeatedBytes-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:08:59.097 [2024-06-11 13:36:51.897455] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:08:59.097 [2024-06-11 13:36:51.897499] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606060606060600, 0xc0c0c0c0c0c0c06) offset=0xa06060606060606 flags=0x3: No space left on device 00:08:59.097 [2024-06-11 13:36:51.897512] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space 
left on device 00:08:59.097 [2024-06-11 13:36:51.897534] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:59.097 [2024-06-11 13:36:51.898454] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606060606060600, 0xc0c0c0c0c0c0c06) flags=0: No such file or directory 00:08:59.097 [2024-06-11 13:36:51.898478] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:59.097 [2024-06-11 13:36:51.898497] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:59.355 #157 NEW cov: 10933 ft: 13334 corp: 3/65b lim: 32 exec/s: 0 rss: 72Mb L: 32/32 MS: 1 CopyPart- 00:08:59.355 [2024-06-11 13:36:52.178071] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:08:59.355 [2024-06-11 13:36:52.178103] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) offset=0xa06060606060606 flags=0x3: No space left on device 00:08:59.355 [2024-06-11 13:36:52.178116] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:59.355 [2024-06-11 13:36:52.178137] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:59.355 [2024-06-11 13:36:52.179110] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) flags=0: No such file or directory 00:08:59.355 [2024-06-11 13:36:52.179135] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:59.355 [2024-06-11 13:36:52.179154] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:59.613 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:08:59.613 #158 NEW cov: 10950 ft: 13946 corp: 4/97b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:08:59.613 [2024-06-11 13:36:52.455492] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:08:59.614 [2024-06-11 13:36:52.455523] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) offset=0xa06060606060606 flags=0x3: No space left on device 00:08:59.614 [2024-06-11 13:36:52.455536] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:59.614 [2024-06-11 13:36:52.455555] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:59.614 [2024-06-11 13:36:52.456523] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) flags=0: No such file or directory 00:08:59.614 [2024-06-11 13:36:52.456552] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:59.614 [2024-06-11 13:36:52.456570] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:08:59.871 #159 NEW cov: 10950 ft: 14044 corp: 5/129b lim: 32 exec/s: 159 rss: 73Mb L: 32/32 MS: 1 CopyPart- 00:08:59.871 [2024-06-11 13:36:52.734941] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:08:59.871 [2024-06-11 13:36:52.734972] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) offset=0xa06060606060606 flags=0x3: No space left on device 00:08:59.871 [2024-06-11 13:36:52.734985] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:08:59.871 [2024-06-11 13:36:52.735004] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:08:59.871 [2024-06-11 13:36:52.735939] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606060625060600, 0xc0c0c0c2b0c0c06) flags=0: No such file or directory 00:08:59.871 [2024-06-11 13:36:52.735963] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:08:59.871 [2024-06-11 13:36:52.735983] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:09:00.130 #170 NEW cov: 10950 ft: 14077 corp: 6/161b lim: 32 exec/s: 170 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:00.130 [2024-06-11 13:36:53.013336] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:09:00.130 [2024-06-11 13:36:53.013366] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606060600060606, 0xc0c0c0c060c0c0c) offset=0xa06060606060606 flags=0x3: No space left on device 00:09:00.130 [2024-06-11 13:36:53.013380] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:09:00.130 [2024-06-11 13:36:53.013399] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:00.130 [2024-06-11 13:36:53.014387] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606060600060606, 0xc0c0c0c060c0c0c) flags=0: No such file or directory 00:09:00.130 [2024-06-11 13:36:53.014412] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:09:00.130 [2024-06-11 13:36:53.014431] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:09:00.388 #171 NEW cov: 10957 ft: 14226 corp: 7/193b lim: 32 exec/s: 171 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:09:00.389 [2024-06-11 13:36:53.293340] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 434041037028460038 > max 8796093022208 00:09:00.389 [2024-06-11 13:36:53.293372] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x606250606000000, 0xc0c2b0c0c060606) offset=0x606060606060606 flags=0x3: No space left on device 00:09:00.389 [2024-06-11 13:36:53.293384] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:09:00.389 [2024-06-11 13:36:53.293404] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:09:00.389 [2024-06-11 13:36:53.294362] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x606250606000000, 0xc0c2b0c0c060606) flags=0: No such file or directory 00:09:00.389 [2024-06-11 13:36:53.294387] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:09:00.389 [2024-06-11 13:36:53.294409] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:09:00.647 #172 NEW cov: 10957 ft: 14368 corp: 8/225b lim: 32 exec/s: 86 rss: 74Mb L: 32/32 MS: 1 
CopyPart- 00:09:00.647 #172 DONE cov: 10957 ft: 14368 corp: 8/225b lim: 32 exec/s: 86 rss: 74Mb 00:09:00.647 Done 172 runs in 2 second(s) 00:09:00.647 [2024-06-11 13:36:53.489437] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:09:00.907 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:00.907 13:36:53 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:09:01.167 [2024-06-11 13:36:53.819352] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 
00:09:01.167 [2024-06-11 13:36:53.819432] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450466 ] 00:09:01.167 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.167 [2024-06-11 13:36:53.907593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.167 [2024-06-11 13:36:54.007364] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.426 INFO: Running with entropic power schedule (0xFF, 100). 00:09:01.426 INFO: Seed: 2537192358 00:09:01.426 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:09:01.426 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:09:01.426 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:09:01.426 INFO: A corpus is not provided, starting from an empty corpus 00:09:01.426 #2 INITED exec/s: 0 rss: 65Mb 00:09:01.426 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:01.426 This may also happen if the target rejected all inputs we tried so far 00:09:01.426 [2024-06-11 13:36:54.283505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:09:01.685 [2024-06-11 13:36:54.357773] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:01.685 [2024-06-11 13:36:54.357818] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:01.943 NEW_FUNC[1/648]: 0x485180 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:09:01.943 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:01.943 #23 NEW cov: 10914 ft: 10603 corp: 2/14b lim: 13 exec/s: 0 rss: 72Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:09:01.944 [2024-06-11 13:36:54.718980] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:01.944 [2024-06-11 13:36:54.719036] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.202 #24 NEW cov: 10933 ft: 13078 corp: 3/27b lim: 13 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:02.202 [2024-06-11 13:36:54.979944] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.202 [2024-06-11 13:36:54.979985] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.462 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:09:02.462 #25 NEW cov: 10950 ft: 14319 corp: 4/40b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:02.462 [2024-06-11 13:36:55.251350] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.462 [2024-06-11 13:36:55.251390] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:02.720 #38 NEW cov: 10950 ft: 15844 corp: 5/53b lim: 13 exec/s: 38 rss: 74Mb L: 13/13 MS: 3 CrossOver-ShuffleBytes-InsertByte- 00:09:02.720 [2024-06-11 13:36:55.524193] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.720 [2024-06-11 13:36:55.524239] vfio_user.c: 144:vfio_user_read: 
*ERROR*: Command 8 return failure 00:09:02.979 #39 NEW cov: 10950 ft: 15928 corp: 6/66b lim: 13 exec/s: 39 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:09:02.979 [2024-06-11 13:36:55.787838] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:02.979 [2024-06-11 13:36:55.787877] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.238 #40 NEW cov: 10950 ft: 16389 corp: 7/79b lim: 13 exec/s: 40 rss: 74Mb L: 13/13 MS: 1 ChangeBit- 00:09:03.238 [2024-06-11 13:36:56.049468] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:03.238 [2024-06-11 13:36:56.049508] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.497 #46 NEW cov: 10957 ft: 16520 corp: 8/92b lim: 13 exec/s: 46 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:09:03.497 [2024-06-11 13:36:56.314055] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:03.497 [2024-06-11 13:36:56.314094] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:03.756 #47 NEW cov: 10957 ft: 16588 corp: 9/105b lim: 13 exec/s: 23 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:09:03.756 #47 DONE cov: 10957 ft: 16588 corp: 9/105b lim: 13 exec/s: 23 rss: 74Mb 00:09:03.756 Done 47 runs in 2 second(s) 00:09:03.756 [2024-06-11 13:36:56.498436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@23 -- # local timen=1 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:09:04.015 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:09:04.015 13:36:56 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:04.016 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:09:04.016 13:36:56 llvm_fuzz.vfio_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:09:04.016 [2024-06-11 13:36:56.857893] Starting SPDK v24.09-pre git sha1 9ccef4907 / DPDK 24.03.0 initialization... 00:09:04.016 [2024-06-11 13:36:56.857978] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450905 ] 00:09:04.016 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.275 [2024-06-11 13:36:56.944524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.275 [2024-06-11 13:36:57.044202] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.534 INFO: Running with entropic power schedule (0xFF, 100). 00:09:04.534 INFO: Seed: 1283253649 00:09:04.534 INFO: Loaded 1 modules (354679 inline 8-bit counters): 354679 [0x296210c, 0x29b8a83), 00:09:04.534 INFO: Loaded 1 PC tables (354679 PCs): 354679 [0x29b8a88,0x2f221f8), 00:09:04.534 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:09:04.534 INFO: A corpus is not provided, starting from an empty corpus 00:09:04.534 #2 INITED exec/s: 0 rss: 66Mb 00:09:04.534 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:04.534 This may also happen if the target rejected all inputs we tried so far 00:09:04.534 [2024-06-11 13:36:57.330140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:09:04.793 [2024-06-11 13:36:57.511368] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:04.793 [2024-06-11 13:36:57.511423] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:05.052 NEW_FUNC[1/648]: 0x485e70 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:09:05.052 NEW_FUNC[2/648]: 0x4883b0 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:05.052 #19 NEW cov: 10911 ft: 10880 corp: 2/10b lim: 9 exec/s: 0 rss: 71Mb L: 9/9 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:05.052 [2024-06-11 13:36:57.932780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:05.052 [2024-06-11 13:36:57.932827] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:05.310 NEW_FUNC[1/1]: 0x1a3de90 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:601 00:09:05.310 #20 NEW cov: 10942 ft: 14056 corp: 3/19b lim: 9 exec/s: 0 rss: 72Mb L: 9/9 MS: 1 ChangeBit- 00:09:05.310 [2024-06-11 13:36:58.203206] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:05.310 [2024-06-11 13:36:58.203246] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:05.569 #21 NEW cov: 10942 ft: 14344 corp: 4/28b lim: 9 exec/s: 21 rss: 73Mb L: 9/9 MS: 1 CrossOver- 00:09:05.569 [2024-06-11 13:36:58.462223] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:05.569 [2024-06-11 13:36:58.462263] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:05.828 #22 NEW cov: 10942 ft: 14650 corp: 5/37b lim: 9 exec/s: 22 rss: 73Mb L: 9/9 MS: 1 CopyPart- 00:09:05.828 [2024-06-11 13:36:58.721888] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:05.828 [2024-06-11 13:36:58.721927] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:06.087 #25 NEW cov: 10942 ft: 15514 corp: 6/46b lim: 9 exec/s: 25 rss: 73Mb L: 9/9 MS: 3 ShuffleBytes-CrossOver-InsertByte- 00:09:06.346 [2024-06-11 13:36:59.001875] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:06.346 [2024-06-11 13:36:59.001915] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:06.347 #26 NEW cov: 10949 ft: 15547 corp: 7/55b lim: 9 exec/s: 26 rss: 73Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:06.605 [2024-06-11 13:36:59.269519] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:09:06.605 [2024-06-11 13:36:59.269560] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:09:06.605 #27 NEW cov: 10949 ft: 15597 corp: 8/64b lim: 9 exec/s: 13 rss: 73Mb L: 9/9 MS: 1 ChangeBit- 00:09:06.605 #27 DONE cov: 10949 ft: 15597 corp: 8/64b lim: 9 exec/s: 13 rss: 73Mb 00:09:06.605 Done 27 runs in 2 second(s) 00:09:06.605 [2024-06-11 13:36:59.450456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:09:06.864 13:36:59 llvm_fuzz.vfio_fuzz -- 
vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:09:06.864 13:36:59 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:06.864 13:36:59 llvm_fuzz.vfio_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:06.864 13:36:59 llvm_fuzz.vfio_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:09:06.864 00:09:06.864 real 0m20.580s 00:09:06.864 user 0m30.048s 00:09:06.864 sys 0m1.939s 00:09:06.864 13:36:59 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:06.864 13:36:59 llvm_fuzz.vfio_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:06.864 ************************************ 00:09:06.864 END TEST vfio_fuzz 00:09:06.864 ************************************ 00:09:07.124 13:36:59 llvm_fuzz -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:09:07.124 00:09:07.124 real 1m27.945s 00:09:07.124 user 2m16.204s 00:09:07.124 sys 0m10.204s 00:09:07.124 13:36:59 llvm_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable 00:09:07.124 13:36:59 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:07.124 ************************************ 00:09:07.124 END TEST llvm_fuzz 00:09:07.124 ************************************ 00:09:07.124 13:36:59 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:09:07.124 13:36:59 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:09:07.124 13:36:59 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:09:07.124 13:36:59 -- common/autotest_common.sh@723 -- # xtrace_disable 00:09:07.124 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:09:07.124 13:36:59 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:09:07.124 13:36:59 -- common/autotest_common.sh@1391 -- # local autotest_es=0 00:09:07.124 13:36:59 -- common/autotest_common.sh@1392 -- # xtrace_disable 00:09:07.124 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:09:11.317 INFO: APP EXITING 00:09:11.317 INFO: killing all VMs 00:09:11.317 INFO: killing vhost app 00:09:11.317 WARN: no vhost pid file found 00:09:11.317 INFO: EXIT DONE 00:09:14.605 Waiting for block devices as requested 00:09:14.605 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:09:14.605 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:09:14.605 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:14.605 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:14.605 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:14.605 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:14.605 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:14.605 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:14.605 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:14.864 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:14.864 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:14.864 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:15.123 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:15.123 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:15.123 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:15.123 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:15.383 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:15.383 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:15.383 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:09:18.673 Cleaning 00:09:18.673 Removing: /dev/shm/spdk_tgt_trace.pid3418086 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3414619 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3416667 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3418086 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3418678 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3419562 00:09:18.673 Removing: 
/var/run/dpdk/spdk_pid3419780 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3420687 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3420703 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3421057 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3421318 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3421587 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3422044 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3422199 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3422439 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3422657 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3422953 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3423771 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3426612 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3426854 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3427061 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3427174 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3427677 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3427734 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3428198 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3428207 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3428534 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3428671 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3428917 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3429133 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3429652 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3429885 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3430121 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3430200 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3430494 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3430671 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3430738 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3430983 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3431268 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3431559 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3431856 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3432106 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3432346 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3432578 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3432815 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3433047 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3433283 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3433524 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3433753 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3433992 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3434241 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3434569 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3434873 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3435133 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3435365 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3435602 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3435839 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3435962 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3436194 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3436807 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3437245 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3437674 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3438115 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3438506 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3438977 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3439422 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3439854 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3440289 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3440731 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3441168 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3441726 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3442161 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3442599 00:09:18.673 Removing: 
/var/run/dpdk/spdk_pid3443428 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3443862 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3444297 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3444715 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3445124 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3445479 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3445851 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3446268 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3446704 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3447139 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3447568 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3448079 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3448519 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3449099 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3449580 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3450029 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3450466 00:09:18.673 Removing: /var/run/dpdk/spdk_pid3450905 00:09:18.673 Clean 00:09:18.673 13:37:11 -- common/autotest_common.sh@1450 -- # return 0 00:09:18.673 13:37:11 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:09:18.673 13:37:11 -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:18.673 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:09:18.673 13:37:11 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:09:18.673 13:37:11 -- common/autotest_common.sh@729 -- # xtrace_disable 00:09:18.673 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:09:18.673 13:37:11 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:18.673 13:37:11 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:09:18.673 13:37:11 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:09:18.673 13:37:11 -- spdk/autotest.sh@391 -- # hash lcov 00:09:18.673 13:37:11 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:09:18.673 13:37:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:18.673 13:37:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:18.673 13:37:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.673 13:37:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.673 13:37:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.673 13:37:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.673 13:37:11 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.673 13:37:11 -- paths/export.sh@5 -- $ export PATH 00:09:18.673 13:37:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.673 13:37:11 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:09:18.673 13:37:11 -- common/autobuild_common.sh@437 -- $ date +%s 00:09:18.673 13:37:11 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1718105831.XXXXXX 00:09:18.673 13:37:11 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1718105831.Dcb8hG 00:09:18.673 13:37:11 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:09:18.673 13:37:11 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:09:18.673 13:37:11 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:09:18.673 13:37:11 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:09:18.673 13:37:11 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:09:18.673 13:37:11 -- common/autobuild_common.sh@453 -- $ get_config_params 00:09:18.673 13:37:11 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:09:18.674 13:37:11 -- common/autotest_common.sh@10 -- $ set +x 00:09:18.674 13:37:11 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:09:18.674 13:37:11 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:09:18.674 13:37:11 -- pm/common@17 -- $ local monitor 00:09:18.674 13:37:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:18.674 13:37:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:18.674 13:37:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:18.674 13:37:11 -- pm/common@21 -- $ date +%s 00:09:18.674 13:37:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:18.674 13:37:11 -- pm/common@21 -- $ date +%s 00:09:18.674 13:37:11 -- pm/common@25 -- $ sleep 1 00:09:18.674 13:37:11 -- pm/common@21 -- $ date +%s 00:09:18.674 13:37:11 -- pm/common@21 -- $ date +%s 00:09:18.674 13:37:11 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718105831 00:09:18.674 13:37:11 
-- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718105831 00:09:18.674 13:37:11 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718105831 00:09:18.674 13:37:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1718105831 00:09:18.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718105831_collect-vmstat.pm.log 00:09:18.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718105831_collect-cpu-load.pm.log 00:09:18.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718105831_collect-cpu-temp.pm.log 00:09:18.674 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1718105831_collect-bmc-pm.bmc.pm.log 00:09:19.611 13:37:12 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:09:19.611 13:37:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j88 00:09:19.611 13:37:12 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:19.611 13:37:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:09:19.611 13:37:12 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:09:19.611 13:37:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:09:19.611 13:37:12 -- spdk/autopackage.sh@19 -- $ timing_finish 00:09:19.611 13:37:12 -- common/autotest_common.sh@735 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:09:19.611 13:37:12 -- common/autotest_common.sh@736 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:09:19.611 13:37:12 -- common/autotest_common.sh@738 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:09:19.611 13:37:12 -- spdk/autopackage.sh@20 -- $ exit 0 00:09:19.611 13:37:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:09:19.611 13:37:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:19.611 13:37:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:19.611 13:37:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:19.611 13:37:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:09:19.611 13:37:12 -- pm/common@44 -- $ pid=3457138 00:09:19.611 13:37:12 -- pm/common@50 -- $ kill -TERM 3457138 00:09:19.611 13:37:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:19.611 13:37:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:09:19.611 13:37:12 -- pm/common@44 -- $ pid=3457141 00:09:19.611 13:37:12 -- pm/common@50 -- $ kill -TERM 3457141 00:09:19.611 13:37:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:19.611 13:37:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:09:19.611 13:37:12 -- pm/common@44 -- $ pid=3457143 
00:09:19.611 13:37:12 -- pm/common@50 -- $ kill -TERM 3457143 00:09:19.611 13:37:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:19.611 13:37:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:09:19.611 13:37:12 -- pm/common@44 -- $ pid=3457178 00:09:19.611 13:37:12 -- pm/common@50 -- $ sudo -E kill -TERM 3457178 00:09:19.611 + [[ -n 3311504 ]] 00:09:19.611 + sudo kill 3311504 00:09:19.880 [Pipeline] } 00:09:19.899 [Pipeline] // stage 00:09:19.905 [Pipeline] } 00:09:19.923 [Pipeline] // timeout 00:09:19.930 [Pipeline] } 00:09:19.948 [Pipeline] // catchError 00:09:19.953 [Pipeline] } 00:09:19.972 [Pipeline] // wrap 00:09:19.978 [Pipeline] } 00:09:19.994 [Pipeline] // catchError 00:09:20.003 [Pipeline] stage 00:09:20.006 [Pipeline] { (Epilogue) 00:09:20.020 [Pipeline] catchError 00:09:20.022 [Pipeline] { 00:09:20.038 [Pipeline] echo 00:09:20.039 Cleanup processes 00:09:20.046 [Pipeline] sh 00:09:20.345 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:20.345 3457320 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:09:20.345 3458036 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:20.392 [Pipeline] sh 00:09:20.676 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:20.676 ++ grep -v 'sudo pgrep' 00:09:20.676 ++ awk '{print $1}' 00:09:20.676 + sudo kill -9 3457320 00:09:20.688 [Pipeline] sh 00:09:20.971 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:09:22.886 [Pipeline] sh 00:09:23.163 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:09:23.163 Artifacts sizes are good 00:09:23.177 [Pipeline] archiveArtifacts 00:09:23.184 Archiving artifacts 00:09:23.237 [Pipeline] sh 00:09:23.514 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:09:23.529 [Pipeline] cleanWs 00:09:23.538 [WS-CLEANUP] Deleting project workspace... 00:09:23.538 [WS-CLEANUP] Deferred wipeout is used... 00:09:23.544 [WS-CLEANUP] done 00:09:23.546 [Pipeline] } 00:09:23.566 [Pipeline] // catchError 00:09:23.578 [Pipeline] sh 00:09:23.859 + logger -p user.info -t JENKINS-CI 00:09:23.868 [Pipeline] } 00:09:23.884 [Pipeline] // stage 00:09:23.890 [Pipeline] } 00:09:23.910 [Pipeline] // node 00:09:23.916 [Pipeline] End of Pipeline 00:09:23.947 Finished: SUCCESS