00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 2030 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3290 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.047 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.077 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.114 Using shallow fetch with depth 1 00:00:00.114 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.114 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.214 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.554 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.563 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.574 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:03.574 > git config core.sparsecheckout # timeout=10 00:00:03.586 > git read-tree -mu HEAD # timeout=10 00:00:03.600 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:03.626 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:03.626 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:03.724 [Pipeline] Start of Pipeline 00:00:03.738 [Pipeline] library 00:00:03.739 Loading library shm_lib@master 00:00:03.739 Library shm_lib@master is cached. Copying from home. 00:00:03.757 [Pipeline] node 00:00:03.772 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.773 [Pipeline] { 00:00:03.785 [Pipeline] catchError 00:00:03.787 [Pipeline] { 00:00:03.801 [Pipeline] wrap 00:00:03.811 [Pipeline] { 00:00:03.819 [Pipeline] stage 00:00:03.821 [Pipeline] { (Prologue) 00:00:04.024 [Pipeline] sh 00:00:04.308 + logger -p user.info -t JENKINS-CI 00:00:04.324 [Pipeline] echo 00:00:04.326 Node: WFP39 00:00:04.331 [Pipeline] sh 00:00:04.623 [Pipeline] setCustomBuildProperty 00:00:04.631 [Pipeline] echo 00:00:04.632 Cleanup processes 00:00:04.635 [Pipeline] sh 00:00:04.914 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.914 3789715 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.927 [Pipeline] sh 00:00:05.209 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:05.209 ++ grep -v 'sudo pgrep' 00:00:05.209 ++ awk '{print $1}' 00:00:05.209 + sudo kill -9 00:00:05.209 + true 00:00:05.221 [Pipeline] cleanWs 00:00:05.228 [WS-CLEANUP] Deleting project workspace... 00:00:05.228 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.233 [WS-CLEANUP] done
00:00:05.237 [Pipeline] setCustomBuildProperty
00:00:05.247 [Pipeline] sh
00:00:05.526 + sudo git config --global --replace-all safe.directory '*'
00:00:05.634 [Pipeline] httpRequest
00:00:05.654 [Pipeline] echo
00:00:05.656 Sorcerer 10.211.164.101 is alive
00:00:05.663 [Pipeline] httpRequest
00:00:05.667 HttpMethod: GET
00:00:05.668 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:05.668 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:05.677 Response Code: HTTP/1.1 200 OK
00:00:05.678 Success: Status code 200 is in the accepted range: 200,404
00:00:05.678 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:09.902 [Pipeline] sh
00:00:10.186 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:10.203 [Pipeline] httpRequest
00:00:10.222 [Pipeline] echo
00:00:10.224 Sorcerer 10.211.164.101 is alive
00:00:10.231 [Pipeline] httpRequest
00:00:10.235 HttpMethod: GET
00:00:10.235 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:10.236 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:10.253 Response Code: HTTP/1.1 200 OK
00:00:10.253 Success: Status code 200 is in the accepted range: 200,404
00:00:10.254 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:03.159 [Pipeline] sh
00:01:03.448 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:06.756 [Pipeline] sh
00:01:07.042 + git -C spdk log --oneline -n5
00:01:07.042 dbef7efac test: fix dpdk builds on ubuntu24
00:01:07.042 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:07.042 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:07.042 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:07.042 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:07.054 [Pipeline] }
00:01:07.071 [Pipeline] // stage
00:01:07.082 [Pipeline] stage
00:01:07.084 [Pipeline] { (Prepare)
00:01:07.100 [Pipeline] writeFile
00:01:07.114 [Pipeline] sh
00:01:07.400 + logger -p user.info -t JENKINS-CI
00:01:07.412 [Pipeline] sh
00:01:07.697 + logger -p user.info -t JENKINS-CI
00:01:07.709 [Pipeline] sh
00:01:07.994 + cat autorun-spdk.conf
00:01:07.994 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.994 SPDK_TEST_FUZZER_SHORT=1
00:01:07.994 SPDK_TEST_FUZZER=1
00:01:07.994 SPDK_RUN_UBSAN=1
00:01:08.002 RUN_NIGHTLY=1
00:01:08.007 [Pipeline] readFile
00:01:08.042 [Pipeline] withEnv
00:01:08.046 [Pipeline] {
00:01:08.065 [Pipeline] sh
00:01:08.355 + set -ex
00:01:08.355 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:01:08.355 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:01:08.356 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.356 ++ SPDK_TEST_FUZZER_SHORT=1
00:01:08.356 ++ SPDK_TEST_FUZZER=1
00:01:08.356 ++ SPDK_RUN_UBSAN=1
00:01:08.356 ++ RUN_NIGHTLY=1
00:01:08.356 + case $SPDK_TEST_NVMF_NICS in
00:01:08.356 + DRIVERS=
00:01:08.356 + [[ -n '' ]]
00:01:08.356 + exit 0
00:01:08.365 [Pipeline] }
00:01:08.384 [Pipeline] // withEnv
00:01:08.390 [Pipeline] }
00:01:08.407 [Pipeline] // stage
00:01:08.417 [Pipeline] catchError
00:01:08.419 [Pipeline] {
00:01:08.435 [Pipeline] timeout
00:01:08.435 Timeout set to expire in 30 min
00:01:08.437 [Pipeline] {
00:01:08.454 [Pipeline] stage
00:01:08.456 [Pipeline] { (Tests)
00:01:08.473 [Pipeline] sh
00:01:08.761 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:08.761 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:08.761 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:01:08.761 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:01:08.761 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:01:08.761 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:08.761 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:01:08.761 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:08.761 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:01:08.761 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:01:08.761 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:01:08.761 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:01:08.761 + source /etc/os-release
00:01:08.761 ++ NAME='Fedora Linux'
00:01:08.761 ++ VERSION='38 (Cloud Edition)'
00:01:08.761 ++ ID=fedora
00:01:08.761 ++ VERSION_ID=38
00:01:08.761 ++ VERSION_CODENAME=
00:01:08.761 ++ PLATFORM_ID=platform:f38
00:01:08.761 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:08.761 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:08.761 ++ LOGO=fedora-logo-icon
00:01:08.761 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:08.761 ++ HOME_URL=https://fedoraproject.org/
00:01:08.761 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:08.761 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:08.761 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:08.761 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:08.761 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:08.761 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:08.761 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:08.761 ++ SUPPORT_END=2024-05-14
00:01:08.761 ++ VARIANT='Cloud Edition'
00:01:08.761 ++ VARIANT_ID=cloud
00:01:08.761 + uname -a
00:01:08.761 Linux spdk-wfp-39 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:01:08.761 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:01:12.058 Hugepages
00:01:12.058 node     hugesize     free /  total
00:01:12.058 node0   1048576kB        0 /      0
00:01:12.058 node0      2048kB        0 /      0
00:01:12.058 node1   1048576kB        0 /      0
00:01:12.318 node1      2048kB        0 /      0
00:01:12.318 
00:01:12.318 Type     BDF             Vendor Device NUMA   Driver           Device     Block devices
00:01:12.318 I/OAT    0000:00:04.0    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.1    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.2    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.3    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.4    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.5    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.6    8086   2021   0      ioatdma          -          -
00:01:12.318 I/OAT    0000:00:04.7    8086   2021   0      ioatdma          -          -
00:01:12.318 NVMe     0000:1a:00.0    8086   0a54   0      nvme             nvme0      nvme0n1
00:01:12.318 I/OAT    0000:80:04.0    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.1    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.2    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.3    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.4    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.5    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.6    8086   2021   1      ioatdma          -          -
00:01:12.318 I/OAT    0000:80:04.7    8086   2021   1      ioatdma          -          -
00:01:12.318 + rm -f /tmp/spdk-ld-path
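
Note: the Hugepages table above is assembled from standard sysfs counters; a minimal sketch of the same per-NUMA-node query (stock Linux sysfs paths, not SPDK's actual setup.sh logic) would be:

#!/usr/bin/env bash
# Print per-node hugepage usage, mirroring the "node hugesize free / total"
# table shown in the log. Paths are the standard Linux sysfs layout.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    size=${hp##*hugepages-}                 # e.g. 2048kB or 1048576kB
    free=$(cat "$hp/free_hugepages")
    total=$(cat "$hp/nr_hugepages")
    printf '%-6s %10s %6s / %6s\n' "$(basename "$node")" "$size" "$free" "$total"
  done
done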
00:01:12.318 + source autorun-spdk.conf 00:01:12.318 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.318 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:12.318 ++ SPDK_TEST_FUZZER=1 00:01:12.318 ++ SPDK_RUN_UBSAN=1 00:01:12.318 ++ RUN_NIGHTLY=1 00:01:12.318 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:12.318 + [[ -n '' ]] 00:01:12.318 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:12.318 + for M in /var/spdk/build-*-manifest.txt 00:01:12.318 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:12.318 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:12.318 + for M in /var/spdk/build-*-manifest.txt 00:01:12.318 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:12.319 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:12.319 ++ uname 00:01:12.319 + [[ Linux == \L\i\n\u\x ]] 00:01:12.319 + sudo dmesg -T 00:01:12.583 + sudo dmesg --clear 00:01:12.583 + dmesg_pid=3791265 00:01:12.583 + [[ Fedora Linux == FreeBSD ]] 00:01:12.583 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.583 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:12.583 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:12.583 + sudo dmesg -Tw 00:01:12.583 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:12.583 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:12.583 + [[ -x /usr/src/fio-static/fio ]] 00:01:12.583 + export FIO_BIN=/usr/src/fio-static/fio 00:01:12.583 + FIO_BIN=/usr/src/fio-static/fio 00:01:12.583 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:12.583 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:12.583 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:12.583 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.583 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:12.583 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:12.583 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.583 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:12.583 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:12.583 Test configuration: 00:01:12.583 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.583 SPDK_TEST_FUZZER_SHORT=1 00:01:12.583 SPDK_TEST_FUZZER=1 00:01:12.583 SPDK_RUN_UBSAN=1 00:01:12.583 RUN_NIGHTLY=1 13:43:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:12.583 13:43:03 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:12.583 13:43:03 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:12.583 13:43:03 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:12.583 13:43:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.583 13:43:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.583 13:43:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.583 13:43:03 -- paths/export.sh@5 -- $ export PATH 00:01:12.583 13:43:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:12.583 13:43:03 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:12.583 13:43:03 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:12.583 13:43:03 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721734983.XXXXXX 00:01:12.583 13:43:03 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721734983.eKU410 00:01:12.583 13:43:03 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:12.583 13:43:03 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:01:12.583 13:43:03 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:12.583 13:43:03 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:12.583 13:43:03 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:12.583 13:43:03 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:12.583 13:43:03 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:12.583 13:43:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.583 13:43:03 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:12.583 13:43:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:12.583 13:43:03 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:12.583 13:43:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:12.583 13:43:03 -- spdk/autobuild.sh@16 -- $ date -u 00:01:12.583 Tue Jul 23 11:43:03 AM UTC 2024 00:01:12.583 13:43:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:12.583 LTS-60-gdbef7efac 00:01:12.583 13:43:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:12.583 13:43:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 
00:01:12.583 13:43:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:12.583 13:43:03 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:12.583 13:43:03 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:12.583 13:43:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.846 ************************************ 00:01:12.846 START TEST ubsan 00:01:12.846 ************************************ 00:01:12.846 13:43:03 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:12.846 using ubsan 00:01:12.846 00:01:12.846 real 0m0.001s 00:01:12.846 user 0m0.000s 00:01:12.846 sys 0m0.000s 00:01:12.846 13:43:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:12.846 13:43:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.846 ************************************ 00:01:12.846 END TEST ubsan 00:01:12.846 ************************************ 00:01:12.846 13:43:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:12.846 13:43:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:12.846 13:43:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:12.846 13:43:03 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:12.846 13:43:03 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:12.846 13:43:03 -- common/autobuild_common.sh@426 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:12.846 13:43:03 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:12.846 13:43:03 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:12.846 13:43:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:12.846 ************************************ 00:01:12.846 START TEST autobuild_llvm_precompile 00:01:12.846 ************************************ 00:01:12.846 13:43:03 -- common/autotest_common.sh@1104 -- $ _llvm_precompile 00:01:12.846 13:43:03 -- common/autobuild_common.sh@32 -- $ clang --version 00:01:12.846 13:43:03 -- common/autobuild_common.sh@32 -- $ [[ clang version 16.0.6 (Fedora 16.0.6-3.fc38) 00:01:12.846 Target: x86_64-redhat-linux-gnu 00:01:12.846 Thread model: posix 00:01:12.846 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:12.846 13:43:03 -- common/autobuild_common.sh@33 -- $ clang_num=16 00:01:12.846 13:43:03 -- common/autobuild_common.sh@35 -- $ export CC=clang-16 00:01:12.846 13:43:03 -- common/autobuild_common.sh@35 -- $ CC=clang-16 00:01:12.846 13:43:03 -- common/autobuild_common.sh@36 -- $ export CXX=clang++-16 00:01:12.846 13:43:03 -- common/autobuild_common.sh@36 -- $ CXX=clang++-16 00:01:12.846 13:43:03 -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a) 00:01:12.846 13:43:03 -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:12.846 13:43:03 -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a ]] 00:01:12.846 13:43:03 -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a' 00:01:12.846 13:43:03 -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:13.106 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:13.106 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:13.676 Using 'verbs' RDMA provider 00:01:29.518 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:47.618 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:47.618 Creating mk/config.mk...done. 00:01:47.618 Creating mk/cc.flags.mk...done. 00:01:47.618 Type 'make' to build. 00:01:47.618 00:01:47.618 real 0m32.237s 00:01:47.618 user 0m14.733s 00:01:47.618 sys 0m16.960s 00:01:47.618 13:43:35 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:47.618 13:43:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.618 ************************************ 00:01:47.618 END TEST autobuild_llvm_precompile 00:01:47.618 ************************************ 00:01:47.618 13:43:35 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:47.618 13:43:35 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:47.618 13:43:35 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:47.618 13:43:35 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:47.618 13:43:35 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib64/clang/16/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a 00:01:47.618 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:47.618 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:47.618 Using 'verbs' RDMA provider 00:01:59.832 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:14.779 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:14.779 Creating mk/config.mk...done. 00:02:14.779 Creating mk/cc.flags.mk...done. 00:02:14.779 Type 'make' to build. 00:02:14.779 13:44:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:02:14.779 13:44:03 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:14.779 13:44:03 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:14.779 13:44:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.779 ************************************ 00:02:14.779 START TEST make 00:02:14.779 ************************************ 00:02:14.779 13:44:03 -- common/autotest_common.sh@1104 -- $ make -j72 00:02:14.779 make[1]: Nothing to be done for 'all'. 
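
Note: the fuzzer_lib resolution logged in the precompile step above relies on bash extended globbing (the @(...) alternation in fuzzer_libs=(...)). A standalone sketch of that lookup, with clang_num/clang_version hard-coded for illustration rather than parsed from the 'clang --version' banner as the build does:

#!/usr/bin/env bash
# Resolve the clang fuzzer runtime archive via an extglob pattern.
# nullglob leaves the array empty when nothing matches.
shopt -s extglob nullglob
clang_num=16          # illustrative values; the build derives these
clang_version=16.0.6  # from the 'clang --version' output
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/linux/libclang_rt.fuzzer_no_main-x86_64.a)
if [[ -e ${fuzzer_libs[0]} ]]; then
  echo "fuzzer lib: ${fuzzer_libs[0]}"
else
  echo "no precompiled fuzzer runtime found" >&2
fi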
00:02:15.037 The Meson build system 00:02:15.037 Version: 1.3.1 00:02:15.037 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:15.037 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:15.037 Build type: native build 00:02:15.037 Project name: libvfio-user 00:02:15.037 Project version: 0.0.1 00:02:15.037 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:15.037 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:15.037 Host machine cpu family: x86_64 00:02:15.037 Host machine cpu: x86_64 00:02:15.037 Run-time dependency threads found: YES 00:02:15.037 Library dl found: YES 00:02:15.037 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:15.037 Run-time dependency json-c found: YES 0.17 00:02:15.037 Run-time dependency cmocka found: YES 1.1.7 00:02:15.037 Program pytest-3 found: NO 00:02:15.037 Program flake8 found: NO 00:02:15.037 Program misspell-fixer found: NO 00:02:15.037 Program restructuredtext-lint found: NO 00:02:15.037 Program valgrind found: YES (/usr/bin/valgrind) 00:02:15.037 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.037 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.037 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.037 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:15.037 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:15.037 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:15.038 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:15.038 Build targets in project: 8 00:02:15.038 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:15.038 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:15.038 00:02:15.038 libvfio-user 0.0.1 00:02:15.038 00:02:15.038 User defined options 00:02:15.038 buildtype : debug 00:02:15.038 default_library: static 00:02:15.038 libdir : /usr/local/lib 00:02:15.038 00:02:15.038 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.603 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:15.603 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:15.603 [2/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:15.603 [3/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:15.603 [4/36] Compiling C object samples/null.p/null.c.o 00:02:15.603 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:15.603 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:15.603 [7/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:15.603 [8/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:15.603 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:15.603 [10/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:15.603 [11/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:15.603 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:15.603 [13/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:15.603 [14/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:15.603 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:15.603 [16/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:15.603 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:15.603 [18/36] Compiling C object samples/server.p/server.c.o 00:02:15.603 [19/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:15.603 [20/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:15.603 [21/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:15.603 [22/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:15.603 [23/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:15.603 [24/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:15.603 [25/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:15.603 [26/36] Compiling C object samples/client.p/client.c.o 00:02:15.862 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:15.862 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:15.862 [29/36] Linking static target lib/libvfio-user.a 00:02:15.862 [30/36] Linking target samples/client 00:02:15.862 [31/36] Linking target test/unit_tests 00:02:15.862 [32/36] Linking target samples/gpio-pci-idio-16 00:02:15.862 [33/36] Linking target samples/lspci 00:02:15.862 [34/36] Linking target samples/server 00:02:15.862 [35/36] Linking target samples/null 00:02:15.862 [36/36] Linking target samples/shadow_ioeventfd_server 00:02:15.862 INFO: autodetecting backend as ninja 00:02:15.862 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:15.862 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:16.428 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:16.428 ninja: no work to do. 00:02:22.995 The Meson build system 00:02:22.995 Version: 1.3.1 00:02:22.995 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:22.995 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:22.995 Build type: native build 00:02:22.995 Program cat found: YES (/usr/bin/cat) 00:02:22.995 Project name: DPDK 00:02:22.995 Project version: 23.11.0 00:02:22.995 C compiler for the host machine: clang-16 (clang 16.0.6 "clang version 16.0.6 (Fedora 16.0.6-3.fc38)") 00:02:22.995 C linker for the host machine: clang-16 ld.bfd 2.39-16 00:02:22.995 Host machine cpu family: x86_64 00:02:22.995 Host machine cpu: x86_64 00:02:22.995 Message: ## Building in Developer Mode ## 00:02:22.995 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.995 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:22.995 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.995 Program python3 found: YES (/usr/bin/python3) 00:02:22.995 Program cat found: YES (/usr/bin/cat) 00:02:22.995 Compiler for C supports arguments -march=native: YES 00:02:22.995 Checking for size of "void *" : 8 00:02:22.995 Checking for size of "void *" : 8 (cached) 00:02:22.995 Library m found: YES 00:02:22.995 Library numa found: YES 00:02:22.995 Has header "numaif.h" : YES 00:02:22.995 Library fdt found: NO 00:02:22.995 Library execinfo found: NO 00:02:22.995 Has header "execinfo.h" : YES 00:02:22.995 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:22.995 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.995 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.995 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.995 Run-time dependency openssl found: YES 3.0.9 00:02:22.995 Run-time dependency libpcap found: YES 1.10.4 00:02:22.995 Has header "pcap.h" with dependency libpcap: YES 00:02:22.995 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.995 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.995 Compiler for C supports arguments -Wformat: YES 00:02:22.995 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:22.995 Compiler for C supports arguments -Wformat-security: YES 00:02:22.995 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.995 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.995 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.995 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.995 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.995 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.995 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.995 Compiler for C supports arguments -Wundef: YES 00:02:22.995 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.995 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:22.995 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:22.995 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.995 Program objdump found: YES (/usr/bin/objdump) 00:02:22.995 
Compiler for C supports arguments -mavx512f: YES 00:02:22.995 Checking if "AVX512 checking" compiles: YES 00:02:22.995 Fetching value of define "__SSE4_2__" : 1 00:02:22.995 Fetching value of define "__AES__" : 1 00:02:22.995 Fetching value of define "__AVX__" : 1 00:02:22.995 Fetching value of define "__AVX2__" : 1 00:02:22.995 Fetching value of define "__AVX512BW__" : 1 00:02:22.995 Fetching value of define "__AVX512CD__" : 1 00:02:22.995 Fetching value of define "__AVX512DQ__" : 1 00:02:22.995 Fetching value of define "__AVX512F__" : 1 00:02:22.995 Fetching value of define "__AVX512VL__" : 1 00:02:22.995 Fetching value of define "__PCLMUL__" : 1 00:02:22.995 Fetching value of define "__RDRND__" : 1 00:02:22.995 Fetching value of define "__RDSEED__" : 1 00:02:22.995 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.995 Fetching value of define "__znver1__" : (undefined) 00:02:22.995 Fetching value of define "__znver2__" : (undefined) 00:02:22.995 Fetching value of define "__znver3__" : (undefined) 00:02:22.995 Fetching value of define "__znver4__" : (undefined) 00:02:22.995 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:22.995 Message: lib/log: Defining dependency "log" 00:02:22.995 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.995 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.995 Checking for function "getentropy" : NO 00:02:22.995 Message: lib/eal: Defining dependency "eal" 00:02:22.995 Message: lib/ring: Defining dependency "ring" 00:02:22.995 Message: lib/rcu: Defining dependency "rcu" 00:02:22.995 Message: lib/mempool: Defining dependency "mempool" 00:02:22.995 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.995 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.995 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:22.995 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:22.995 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:22.995 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:22.995 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:22.995 Compiler for C supports arguments -mpclmul: YES 00:02:22.995 Compiler for C supports arguments -maes: YES 00:02:22.995 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.995 Compiler for C supports arguments -mavx512bw: YES 00:02:22.995 Compiler for C supports arguments -mavx512dq: YES 00:02:22.995 Compiler for C supports arguments -mavx512vl: YES 00:02:22.995 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:22.995 Compiler for C supports arguments -mavx2: YES 00:02:22.995 Compiler for C supports arguments -mavx: YES 00:02:22.995 Message: lib/net: Defining dependency "net" 00:02:22.995 Message: lib/meter: Defining dependency "meter" 00:02:22.995 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.995 Message: lib/pci: Defining dependency "pci" 00:02:22.995 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.995 Message: lib/hash: Defining dependency "hash" 00:02:22.995 Message: lib/timer: Defining dependency "timer" 00:02:22.995 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.995 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.995 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.995 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.995 Message: lib/power: Defining dependency "power" 00:02:22.995 Message: lib/reorder: Defining dependency "reorder" 00:02:22.995 Message: lib/security: Defining dependency 
"security" 00:02:22.995 Has header "linux/userfaultfd.h" : YES 00:02:22.995 Has header "linux/vduse.h" : YES 00:02:22.995 Message: lib/vhost: Defining dependency "vhost" 00:02:22.995 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:22.995 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.995 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.995 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.995 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.995 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.995 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.995 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.995 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.995 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:22.995 Program doxygen found: YES (/usr/bin/doxygen) 00:02:22.995 Configuring doxy-api-html.conf using configuration 00:02:22.995 Configuring doxy-api-man.conf using configuration 00:02:22.995 Program mandb found: YES (/usr/bin/mandb) 00:02:22.995 Program sphinx-build found: NO 00:02:22.995 Configuring rte_build_config.h using configuration 00:02:22.995 Message: 00:02:22.995 ================= 00:02:22.995 Applications Enabled 00:02:22.995 ================= 00:02:22.995 00:02:22.995 apps: 00:02:22.995 00:02:22.995 00:02:22.995 Message: 00:02:22.995 ================= 00:02:22.995 Libraries Enabled 00:02:22.995 ================= 00:02:22.995 00:02:22.995 libs: 00:02:22.995 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.995 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.995 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.995 00:02:22.995 Message: 00:02:22.995 =============== 00:02:22.995 Drivers Enabled 00:02:22.995 =============== 00:02:22.995 00:02:22.995 common: 00:02:22.995 00:02:22.995 bus: 00:02:22.995 pci, vdev, 00:02:22.995 mempool: 00:02:22.995 ring, 00:02:22.995 dma: 00:02:22.995 00:02:22.995 net: 00:02:22.995 00:02:22.995 crypto: 00:02:22.995 00:02:22.995 compress: 00:02:22.995 00:02:22.995 vdpa: 00:02:22.995 00:02:22.995 00:02:22.995 Message: 00:02:22.995 ================= 00:02:22.995 Content Skipped 00:02:22.995 ================= 00:02:22.995 00:02:22.995 apps: 00:02:22.995 dumpcap: explicitly disabled via build config 00:02:22.995 graph: explicitly disabled via build config 00:02:22.995 pdump: explicitly disabled via build config 00:02:22.995 proc-info: explicitly disabled via build config 00:02:22.995 test-acl: explicitly disabled via build config 00:02:22.995 test-bbdev: explicitly disabled via build config 00:02:22.995 test-cmdline: explicitly disabled via build config 00:02:22.996 test-compress-perf: explicitly disabled via build config 00:02:22.996 test-crypto-perf: explicitly disabled via build config 00:02:22.996 test-dma-perf: explicitly disabled via build config 00:02:22.996 test-eventdev: explicitly disabled via build config 00:02:22.996 test-fib: explicitly disabled via build config 00:02:22.996 test-flow-perf: explicitly disabled via build config 00:02:22.996 test-gpudev: explicitly disabled via build config 00:02:22.996 test-mldev: explicitly disabled via build config 00:02:22.996 test-pipeline: explicitly disabled via build config 00:02:22.996 test-pmd: explicitly disabled via build config 00:02:22.996 test-regex: explicitly disabled via 
build config 00:02:22.996 test-sad: explicitly disabled via build config 00:02:22.996 test-security-perf: explicitly disabled via build config 00:02:22.996 00:02:22.996 libs: 00:02:22.996 metrics: explicitly disabled via build config 00:02:22.996 acl: explicitly disabled via build config 00:02:22.996 bbdev: explicitly disabled via build config 00:02:22.996 bitratestats: explicitly disabled via build config 00:02:22.996 bpf: explicitly disabled via build config 00:02:22.996 cfgfile: explicitly disabled via build config 00:02:22.996 distributor: explicitly disabled via build config 00:02:22.996 efd: explicitly disabled via build config 00:02:22.996 eventdev: explicitly disabled via build config 00:02:22.996 dispatcher: explicitly disabled via build config 00:02:22.996 gpudev: explicitly disabled via build config 00:02:22.996 gro: explicitly disabled via build config 00:02:22.996 gso: explicitly disabled via build config 00:02:22.996 ip_frag: explicitly disabled via build config 00:02:22.996 jobstats: explicitly disabled via build config 00:02:22.996 latencystats: explicitly disabled via build config 00:02:22.996 lpm: explicitly disabled via build config 00:02:22.996 member: explicitly disabled via build config 00:02:22.996 pcapng: explicitly disabled via build config 00:02:22.996 rawdev: explicitly disabled via build config 00:02:22.996 regexdev: explicitly disabled via build config 00:02:22.996 mldev: explicitly disabled via build config 00:02:22.996 rib: explicitly disabled via build config 00:02:22.996 sched: explicitly disabled via build config 00:02:22.996 stack: explicitly disabled via build config 00:02:22.996 ipsec: explicitly disabled via build config 00:02:22.996 pdcp: explicitly disabled via build config 00:02:22.996 fib: explicitly disabled via build config 00:02:22.996 port: explicitly disabled via build config 00:02:22.996 pdump: explicitly disabled via build config 00:02:22.996 table: explicitly disabled via build config 00:02:22.996 pipeline: explicitly disabled via build config 00:02:22.996 graph: explicitly disabled via build config 00:02:22.996 node: explicitly disabled via build config 00:02:22.996 00:02:22.996 drivers: 00:02:22.996 common/cpt: not in enabled drivers build config 00:02:22.996 common/dpaax: not in enabled drivers build config 00:02:22.996 common/iavf: not in enabled drivers build config 00:02:22.996 common/idpf: not in enabled drivers build config 00:02:22.996 common/mvep: not in enabled drivers build config 00:02:22.996 common/octeontx: not in enabled drivers build config 00:02:22.996 bus/auxiliary: not in enabled drivers build config 00:02:22.996 bus/cdx: not in enabled drivers build config 00:02:22.996 bus/dpaa: not in enabled drivers build config 00:02:22.996 bus/fslmc: not in enabled drivers build config 00:02:22.996 bus/ifpga: not in enabled drivers build config 00:02:22.996 bus/platform: not in enabled drivers build config 00:02:22.996 bus/vmbus: not in enabled drivers build config 00:02:22.996 common/cnxk: not in enabled drivers build config 00:02:22.996 common/mlx5: not in enabled drivers build config 00:02:22.996 common/nfp: not in enabled drivers build config 00:02:22.996 common/qat: not in enabled drivers build config 00:02:22.996 common/sfc_efx: not in enabled drivers build config 00:02:22.996 mempool/bucket: not in enabled drivers build config 00:02:22.996 mempool/cnxk: not in enabled drivers build config 00:02:22.996 mempool/dpaa: not in enabled drivers build config 00:02:22.996 mempool/dpaa2: not in enabled drivers build config 00:02:22.996 
mempool/octeontx: not in enabled drivers build config 00:02:22.996 mempool/stack: not in enabled drivers build config 00:02:22.996 dma/cnxk: not in enabled drivers build config 00:02:22.996 dma/dpaa: not in enabled drivers build config 00:02:22.996 dma/dpaa2: not in enabled drivers build config 00:02:22.996 dma/hisilicon: not in enabled drivers build config 00:02:22.996 dma/idxd: not in enabled drivers build config 00:02:22.996 dma/ioat: not in enabled drivers build config 00:02:22.996 dma/skeleton: not in enabled drivers build config 00:02:22.996 net/af_packet: not in enabled drivers build config 00:02:22.996 net/af_xdp: not in enabled drivers build config 00:02:22.996 net/ark: not in enabled drivers build config 00:02:22.996 net/atlantic: not in enabled drivers build config 00:02:22.996 net/avp: not in enabled drivers build config 00:02:22.996 net/axgbe: not in enabled drivers build config 00:02:22.996 net/bnx2x: not in enabled drivers build config 00:02:22.996 net/bnxt: not in enabled drivers build config 00:02:22.996 net/bonding: not in enabled drivers build config 00:02:22.996 net/cnxk: not in enabled drivers build config 00:02:22.996 net/cpfl: not in enabled drivers build config 00:02:22.996 net/cxgbe: not in enabled drivers build config 00:02:22.996 net/dpaa: not in enabled drivers build config 00:02:22.996 net/dpaa2: not in enabled drivers build config 00:02:22.996 net/e1000: not in enabled drivers build config 00:02:22.996 net/ena: not in enabled drivers build config 00:02:22.996 net/enetc: not in enabled drivers build config 00:02:22.996 net/enetfec: not in enabled drivers build config 00:02:22.996 net/enic: not in enabled drivers build config 00:02:22.996 net/failsafe: not in enabled drivers build config 00:02:22.996 net/fm10k: not in enabled drivers build config 00:02:22.996 net/gve: not in enabled drivers build config 00:02:22.996 net/hinic: not in enabled drivers build config 00:02:22.996 net/hns3: not in enabled drivers build config 00:02:22.996 net/i40e: not in enabled drivers build config 00:02:22.996 net/iavf: not in enabled drivers build config 00:02:22.996 net/ice: not in enabled drivers build config 00:02:22.996 net/idpf: not in enabled drivers build config 00:02:22.996 net/igc: not in enabled drivers build config 00:02:22.996 net/ionic: not in enabled drivers build config 00:02:22.996 net/ipn3ke: not in enabled drivers build config 00:02:22.996 net/ixgbe: not in enabled drivers build config 00:02:22.996 net/mana: not in enabled drivers build config 00:02:22.996 net/memif: not in enabled drivers build config 00:02:22.996 net/mlx4: not in enabled drivers build config 00:02:22.996 net/mlx5: not in enabled drivers build config 00:02:22.996 net/mvneta: not in enabled drivers build config 00:02:22.996 net/mvpp2: not in enabled drivers build config 00:02:22.996 net/netvsc: not in enabled drivers build config 00:02:22.996 net/nfb: not in enabled drivers build config 00:02:22.996 net/nfp: not in enabled drivers build config 00:02:22.996 net/ngbe: not in enabled drivers build config 00:02:22.996 net/null: not in enabled drivers build config 00:02:22.996 net/octeontx: not in enabled drivers build config 00:02:22.996 net/octeon_ep: not in enabled drivers build config 00:02:22.996 net/pcap: not in enabled drivers build config 00:02:22.996 net/pfe: not in enabled drivers build config 00:02:22.996 net/qede: not in enabled drivers build config 00:02:22.996 net/ring: not in enabled drivers build config 00:02:22.996 net/sfc: not in enabled drivers build config 00:02:22.996 net/softnic: 
not in enabled drivers build config 00:02:22.996 net/tap: not in enabled drivers build config 00:02:22.996 net/thunderx: not in enabled drivers build config 00:02:22.996 net/txgbe: not in enabled drivers build config 00:02:22.996 net/vdev_netvsc: not in enabled drivers build config 00:02:22.996 net/vhost: not in enabled drivers build config 00:02:22.996 net/virtio: not in enabled drivers build config 00:02:22.996 net/vmxnet3: not in enabled drivers build config 00:02:22.996 raw/*: missing internal dependency, "rawdev" 00:02:22.996 crypto/armv8: not in enabled drivers build config 00:02:22.996 crypto/bcmfs: not in enabled drivers build config 00:02:22.996 crypto/caam_jr: not in enabled drivers build config 00:02:22.996 crypto/ccp: not in enabled drivers build config 00:02:22.996 crypto/cnxk: not in enabled drivers build config 00:02:22.996 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.996 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.996 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.996 crypto/mlx5: not in enabled drivers build config 00:02:22.996 crypto/mvsam: not in enabled drivers build config 00:02:22.996 crypto/nitrox: not in enabled drivers build config 00:02:22.996 crypto/null: not in enabled drivers build config 00:02:22.996 crypto/octeontx: not in enabled drivers build config 00:02:22.996 crypto/openssl: not in enabled drivers build config 00:02:22.996 crypto/scheduler: not in enabled drivers build config 00:02:22.996 crypto/uadk: not in enabled drivers build config 00:02:22.996 crypto/virtio: not in enabled drivers build config 00:02:22.996 compress/isal: not in enabled drivers build config 00:02:22.996 compress/mlx5: not in enabled drivers build config 00:02:22.996 compress/octeontx: not in enabled drivers build config 00:02:22.996 compress/zlib: not in enabled drivers build config 00:02:22.996 regex/*: missing internal dependency, "regexdev" 00:02:22.996 ml/*: missing internal dependency, "mldev" 00:02:22.996 vdpa/ifc: not in enabled drivers build config 00:02:22.996 vdpa/mlx5: not in enabled drivers build config 00:02:22.996 vdpa/nfp: not in enabled drivers build config 00:02:22.996 vdpa/sfc: not in enabled drivers build config 00:02:22.996 event/*: missing internal dependency, "eventdev" 00:02:22.996 baseband/*: missing internal dependency, "bbdev" 00:02:22.996 gpu/*: missing internal dependency, "gpudev" 00:02:22.996 00:02:22.996 00:02:23.256 Build targets in project: 85 00:02:23.256 00:02:23.256 DPDK 23.11.0 00:02:23.256 00:02:23.256 User defined options 00:02:23.256 buildtype : debug 00:02:23.256 default_library : static 00:02:23.256 libdir : lib 00:02:23.256 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:23.256 c_args : -fPIC -Werror 00:02:23.256 c_link_args : 00:02:23.256 cpu_instruction_set: native 00:02:23.256 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:23.256 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:02:23.256 enable_docs : false 00:02:23.256 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:23.256 enable_kmods : false 00:02:23.256 tests : false 00:02:23.256 
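
Note: the "User defined options" summary above maps directly onto meson -D flags, so the logged DPDK configuration can be reproduced by hand. A hedged sketch of an equivalent invocation (disable/enable lists abridged to a few entries from the summary; the exact flags come from SPDK's DPDK build wrapper, which this log does not show):

# Rebuild the logged DPDK 23.11 configuration manually (illustrative, abridged).
meson setup build-tmp \
  -Dbuildtype=debug \
  -Ddefault_library=static \
  -Dprefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
  -Dc_args='-fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info \
  -Ddisable_libs=acl,bbdev,bitratestats,bpf \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Denable_docs=false -Denable_kmods=false -Dtests=false
ninja -C build-tmp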
00:02:23.256 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.523 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:23.784 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.784 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.784 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.784 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.784 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.784 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.784 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.784 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.784 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.784 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.784 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.784 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.784 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.784 [14/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.784 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.784 [16/265] Linking static target lib/librte_kvargs.a 00:02:23.784 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.784 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.784 [19/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.784 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.784 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.784 [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.784 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.784 [24/265] Linking static target lib/librte_log.a 00:02:23.784 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.047 [26/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.047 [27/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.310 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.310 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.310 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.310 [31/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.310 [32/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.310 [33/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.310 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.310 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.310 [36/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.310 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.310 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.310 [39/265] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.310 [40/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.310 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.310 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.310 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.310 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.310 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.310 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.310 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.310 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.310 [49/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.310 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.310 [51/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.311 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.311 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.311 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.311 [55/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.311 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.311 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.311 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.311 [59/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.311 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.311 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.311 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.311 [63/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.311 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.311 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.311 [66/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.311 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.311 [68/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.311 [69/265] Linking static target lib/librte_telemetry.a 00:02:24.311 [70/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:24.311 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.311 [72/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.311 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.311 [74/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.311 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.311 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.311 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.311 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 
00:02:24.311 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.311 [80/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.311 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.311 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.311 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.311 [84/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.311 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.311 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.311 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.570 [88/265] Linking static target lib/librte_pci.a 00:02:24.570 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.570 [90/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.570 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.570 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.570 [93/265] Linking static target lib/librte_ring.a 00:02:24.570 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.570 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.570 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.570 [97/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.570 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.570 [99/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.570 [100/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.570 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.570 [102/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.570 [103/265] Linking static target lib/librte_meter.a 00:02:24.570 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.570 [105/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.570 [106/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.570 [107/265] Linking target lib/librte_log.so.24.0 00:02:24.570 [108/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:24.570 [109/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.570 [110/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.570 [111/265] Linking static target lib/librte_eal.a 00:02:24.570 [112/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.570 [113/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.570 [114/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.570 [115/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.570 [116/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.570 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.570 [118/265] Linking static target lib/librte_mempool.a 00:02:24.570 [119/265] Linking static target lib/librte_rcu.a 00:02:24.570 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.570 [121/265] Linking static 
target lib/librte_net.a 00:02:24.570 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.829 [123/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.829 [124/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:24.829 [125/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.829 [126/265] Linking static target lib/librte_mbuf.a 00:02:24.829 [127/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.829 [128/265] Linking target lib/librte_kvargs.so.24.0 00:02:24.829 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.829 [130/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.829 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.829 [132/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.088 [133/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.088 [134/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.088 [135/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.088 [136/265] Linking target lib/librte_telemetry.so.24.0 00:02:25.088 [137/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:25.088 [138/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.088 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:25.088 [140/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.088 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.088 [142/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:25.088 [143/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.088 [144/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:25.088 [145/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:25.088 [146/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:25.088 [147/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:25.088 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:25.088 [149/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.088 [150/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:25.088 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:25.088 [152/265] Linking static target lib/librte_timer.a 00:02:25.088 [153/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.088 [154/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.088 [155/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.088 [156/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.088 [157/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.088 [158/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:25.088 [159/265] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.088 [160/265] Linking static target lib/librte_reorder.a 00:02:25.088 [161/265] Linking static target lib/librte_cmdline.a 00:02:25.088 [162/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:25.088 [163/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.088 [164/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.088 [165/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:25.088 [166/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.088 [167/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:25.088 [168/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:25.088 [169/265] Linking static target lib/librte_power.a 00:02:25.088 [170/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.088 [171/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.088 [172/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.348 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:25.348 [174/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.348 [175/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:25.348 [176/265] Linking static target lib/librte_dmadev.a 00:02:25.348 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.348 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.348 [179/265] Linking static target lib/librte_compressdev.a 00:02:25.348 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.348 [181/265] Linking static target lib/librte_hash.a 00:02:25.348 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.348 [183/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.348 [184/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.348 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:25.348 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.348 [187/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.348 [188/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:25.348 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.348 [190/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.348 [191/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.348 [192/265] Linking static target lib/librte_security.a 00:02:25.348 [193/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.348 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.348 [195/265] Linking static target lib/librte_cryptodev.a 00:02:25.348 [196/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.348 [197/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.348 [198/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.607 [199/265] Linking static target lib/librte_ethdev.a 00:02:25.607 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.607 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 
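Alongside the library targets, each DPDK driver appearing here (bus_vdev, bus_pci, mempool_ring) first has an rte_*.pmd.c registration stub generated by a custom command and is then compiled twice, into a static archive and into a versioned shared object. Assuming the usual meson layout under build-tmp, both artifacts can be inspected after the build finishes; the relative paths below are an assumption, not something printed by the log:

  # Compare the static and shared flavours of the vdev bus driver.
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
  ar t drivers/librte_bus_vdev.a                            # members of the static archive
  readelf -d drivers/librte_bus_vdev.so.24.0 | grep SONAME  # identity of the shared object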
00:02:25.607 [202/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.607 [203/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.607 [204/265] Linking static target drivers/librte_bus_vdev.a 00:02:25.607 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.607 [206/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.607 [207/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.607 [208/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.607 [209/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.607 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.607 [211/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.607 [212/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.607 [213/265] Linking static target drivers/librte_bus_pci.a 00:02:25.607 [214/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.607 [215/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.607 [216/265] Linking static target drivers/librte_mempool_ring.a 00:02:25.866 [217/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.866 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.126 [219/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.126 [220/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.126 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.385 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.645 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.645 [224/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.645 [225/265] Linking static target lib/librte_vhost.a 00:02:26.645 [226/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.025 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.964 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.532 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.132 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.132 [231/265] Linking target lib/librte_eal.so.24.0 00:02:38.132 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:38.132 [233/265] Linking target lib/librte_timer.so.24.0 00:02:38.132 [234/265] Linking target lib/librte_ring.so.24.0 00:02:38.132 [235/265] Linking target lib/librte_meter.so.24.0 00:02:38.132 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:38.132 [237/265] Linking target lib/librte_dmadev.so.24.0 00:02:38.132 [238/265] Linking target lib/librte_pci.so.24.0 
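The "Generating symbol file … .symbols" entries are meson recording each shared library's exported-symbol list, which lets it skip relinking dependents when a library's interface has not changed. The same information can be read straight off a finished shared object; the library name below is taken from the log, while the path relative to build-tmp is assumed:

  # Dump the dynamic symbols exported by librte_eal.so.24.0.
  nm -D --defined-only lib/librte_eal.so.24.0 | awk '{print $3}' | sort | head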
00:02:38.132 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:38.132 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:38.132 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:38.132 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:38.132 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:38.390 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:38.390 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:38.390 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:38.390 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:38.390 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:38.648 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:38.648 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:38.648 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:38.908 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:38.908 [253/265] Linking target lib/librte_net.so.24.0 00:02:38.908 [254/265] Linking target lib/librte_compressdev.so.24.0 00:02:38.908 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:38.908 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:38.908 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:39.166 [258/265] Linking target lib/librte_hash.so.24.0 00:02:39.166 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:39.166 [260/265] Linking target lib/librte_security.so.24.0 00:02:39.166 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:39.166 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:39.166 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:39.426 [264/265] Linking target lib/librte_power.so.24.0 00:02:39.426 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:39.426 INFO: autodetecting backend as ninja 00:02:39.426 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:40.360 CC lib/log/log.o 00:02:40.360 CC lib/log/log_flags.o 00:02:40.360 CC lib/log/log_deprecated.o 00:02:40.360 CC lib/ut_mock/mock.o 00:02:40.360 CC lib/ut/ut.o 00:02:40.619 LIB libspdk_ut_mock.a 00:02:40.619 LIB libspdk_log.a 00:02:40.619 LIB libspdk_ut.a 00:02:40.878 CC lib/util/base64.o 00:02:40.878 CC lib/util/cpuset.o 00:02:40.878 CC lib/util/bit_array.o 00:02:40.878 CC lib/util/crc16.o 00:02:40.878 CC lib/util/crc32.o 00:02:40.878 CC lib/util/crc32c.o 00:02:40.878 CC lib/util/crc32_ieee.o 00:02:40.878 CC lib/util/crc64.o 00:02:40.878 CC lib/util/dif.o 00:02:40.878 CC lib/util/fd.o 00:02:40.878 CC lib/util/file.o 00:02:40.878 CC lib/dma/dma.o 00:02:40.878 CC lib/util/hexlify.o 00:02:40.878 CXX lib/trace_parser/trace.o 00:02:40.878 CC lib/util/iov.o 00:02:40.878 CC lib/util/math.o 00:02:40.878 CC lib/util/pipe.o 00:02:40.878 CC lib/util/strerror_tls.o 00:02:40.878 CC lib/util/string.o 00:02:40.878 CC lib/util/uuid.o 00:02:40.878 CC lib/util/fd_group.o 00:02:40.878 CC lib/util/xor.o 00:02:40.878 CC lib/util/zipf.o 00:02:40.878 CC lib/ioat/ioat.o 00:02:41.135 CC lib/vfio_user/host/vfio_user_pci.o 00:02:41.135 CC 
lib/vfio_user/host/vfio_user.o 00:02:41.135 LIB libspdk_dma.a 00:02:41.135 LIB libspdk_ioat.a 00:02:41.393 LIB libspdk_vfio_user.a 00:02:41.393 LIB libspdk_util.a 00:02:41.652 LIB libspdk_trace_parser.a 00:02:41.652 CC lib/json/json_util.o 00:02:41.652 CC lib/json/json_parse.o 00:02:41.652 CC lib/json/json_write.o 00:02:41.652 CC lib/rdma/common.o 00:02:41.652 CC lib/vmd/vmd.o 00:02:41.652 CC lib/rdma/rdma_verbs.o 00:02:41.652 CC lib/vmd/led.o 00:02:41.652 CC lib/conf/conf.o 00:02:41.652 CC lib/idxd/idxd.o 00:02:41.652 CC lib/env_dpdk/env.o 00:02:41.652 CC lib/env_dpdk/memory.o 00:02:41.652 CC lib/idxd/idxd_user.o 00:02:41.652 CC lib/idxd/idxd_kernel.o 00:02:41.652 CC lib/env_dpdk/pci.o 00:02:41.652 CC lib/env_dpdk/init.o 00:02:41.652 CC lib/env_dpdk/threads.o 00:02:41.652 CC lib/env_dpdk/pci_ioat.o 00:02:41.652 CC lib/env_dpdk/pci_virtio.o 00:02:41.652 CC lib/env_dpdk/pci_vmd.o 00:02:41.652 CC lib/env_dpdk/pci_event.o 00:02:41.652 CC lib/env_dpdk/pci_idxd.o 00:02:41.652 CC lib/env_dpdk/sigbus_handler.o 00:02:41.652 CC lib/env_dpdk/pci_dpdk.o 00:02:41.652 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:41.652 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:41.910 LIB libspdk_rdma.a 00:02:41.910 LIB libspdk_conf.a 00:02:41.910 LIB libspdk_json.a 00:02:42.169 LIB libspdk_idxd.a 00:02:42.169 LIB libspdk_vmd.a 00:02:42.169 CC lib/jsonrpc/jsonrpc_server.o 00:02:42.169 CC lib/jsonrpc/jsonrpc_client.o 00:02:42.169 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:42.169 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:42.427 LIB libspdk_jsonrpc.a 00:02:42.996 CC lib/rpc/rpc.o 00:02:42.996 LIB libspdk_env_dpdk.a 00:02:42.996 LIB libspdk_rpc.a 00:02:43.565 CC lib/trace/trace.o 00:02:43.565 CC lib/trace/trace_flags.o 00:02:43.565 CC lib/trace/trace_rpc.o 00:02:43.565 CC lib/sock/sock.o 00:02:43.565 CC lib/sock/sock_rpc.o 00:02:43.565 CC lib/notify/notify.o 00:02:43.565 CC lib/notify/notify_rpc.o 00:02:43.565 LIB libspdk_notify.a 00:02:43.565 LIB libspdk_trace.a 00:02:43.825 LIB libspdk_sock.a 00:02:43.825 CC lib/thread/thread.o 00:02:43.825 CC lib/thread/iobuf.o 00:02:44.084 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:44.084 CC lib/nvme/nvme_ctrlr.o 00:02:44.084 CC lib/nvme/nvme_fabric.o 00:02:44.084 CC lib/nvme/nvme_ns.o 00:02:44.084 CC lib/nvme/nvme_ns_cmd.o 00:02:44.084 CC lib/nvme/nvme_pcie_common.o 00:02:44.084 CC lib/nvme/nvme_pcie.o 00:02:44.084 CC lib/nvme/nvme.o 00:02:44.084 CC lib/nvme/nvme_qpair.o 00:02:44.084 CC lib/nvme/nvme_quirks.o 00:02:44.084 CC lib/nvme/nvme_transport.o 00:02:44.084 CC lib/nvme/nvme_discovery.o 00:02:44.084 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:44.084 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:44.084 CC lib/nvme/nvme_tcp.o 00:02:44.084 CC lib/nvme/nvme_opal.o 00:02:44.084 CC lib/nvme/nvme_io_msg.o 00:02:44.084 CC lib/nvme/nvme_poll_group.o 00:02:44.084 CC lib/nvme/nvme_zns.o 00:02:44.084 CC lib/nvme/nvme_cuse.o 00:02:44.084 CC lib/nvme/nvme_vfio_user.o 00:02:44.084 CC lib/nvme/nvme_rdma.o 00:02:45.023 LIB libspdk_thread.a 00:02:45.282 CC lib/accel/accel.o 00:02:45.282 CC lib/blob/blobstore.o 00:02:45.282 CC lib/accel/accel_rpc.o 00:02:45.282 CC lib/accel/accel_sw.o 00:02:45.282 CC lib/blob/zeroes.o 00:02:45.282 CC lib/blob/request.o 00:02:45.282 CC lib/blob/blob_bs_dev.o 00:02:45.282 CC lib/virtio/virtio.o 00:02:45.282 CC lib/virtio/virtio_vfio_user.o 00:02:45.282 CC lib/virtio/virtio_vhost_user.o 00:02:45.282 CC lib/virtio/virtio_pci.o 00:02:45.282 CC lib/init/json_config.o 00:02:45.282 CC lib/init/subsystem.o 00:02:45.282 CC lib/init/subsystem_rpc.o 00:02:45.282 CC lib/init/rpc.o 00:02:45.282 CC 
lib/vfu_tgt/tgt_endpoint.o 00:02:45.282 CC lib/vfu_tgt/tgt_rpc.o 00:02:45.541 LIB libspdk_init.a 00:02:45.541 LIB libspdk_virtio.a 00:02:45.800 LIB libspdk_vfu_tgt.a 00:02:45.801 LIB libspdk_nvme.a 00:02:45.801 CC lib/event/app.o 00:02:45.801 CC lib/event/reactor.o 00:02:45.801 CC lib/event/log_rpc.o 00:02:45.801 CC lib/event/app_rpc.o 00:02:45.801 CC lib/event/scheduler_static.o 00:02:46.369 LIB libspdk_event.a 00:02:46.369 LIB libspdk_accel.a 00:02:46.629 CC lib/bdev/bdev.o 00:02:46.629 CC lib/bdev/bdev_rpc.o 00:02:46.629 CC lib/bdev/bdev_zone.o 00:02:46.629 CC lib/bdev/part.o 00:02:46.629 CC lib/bdev/scsi_nvme.o 00:02:47.567 LIB libspdk_blob.a 00:02:48.136 CC lib/blobfs/blobfs.o 00:02:48.136 CC lib/blobfs/tree.o 00:02:48.136 CC lib/lvol/lvol.o 00:02:48.704 LIB libspdk_lvol.a 00:02:48.704 LIB libspdk_blobfs.a 00:02:49.273 LIB libspdk_bdev.a 00:02:49.538 CC lib/nbd/nbd.o 00:02:49.538 CC lib/nbd/nbd_rpc.o 00:02:49.538 CC lib/nvmf/ctrlr.o 00:02:49.538 CC lib/scsi/dev.o 00:02:49.538 CC lib/nvmf/ctrlr_discovery.o 00:02:49.538 CC lib/scsi/lun.o 00:02:49.538 CC lib/ublk/ublk.o 00:02:49.538 CC lib/nvmf/ctrlr_bdev.o 00:02:49.538 CC lib/scsi/port.o 00:02:49.538 CC lib/ftl/ftl_core.o 00:02:49.538 CC lib/ublk/ublk_rpc.o 00:02:49.538 CC lib/nvmf/subsystem.o 00:02:49.538 CC lib/scsi/scsi.o 00:02:49.538 CC lib/ftl/ftl_init.o 00:02:49.538 CC lib/nvmf/nvmf.o 00:02:49.538 CC lib/scsi/scsi_bdev.o 00:02:49.538 CC lib/ftl/ftl_layout.o 00:02:49.538 CC lib/nvmf/nvmf_rpc.o 00:02:49.538 CC lib/scsi/scsi_pr.o 00:02:49.538 CC lib/ftl/ftl_debug.o 00:02:49.538 CC lib/nvmf/transport.o 00:02:49.538 CC lib/scsi/scsi_rpc.o 00:02:49.538 CC lib/ftl/ftl_io.o 00:02:49.538 CC lib/scsi/task.o 00:02:49.538 CC lib/nvmf/tcp.o 00:02:49.538 CC lib/ftl/ftl_sb.o 00:02:49.538 CC lib/nvmf/vfio_user.o 00:02:49.538 CC lib/ftl/ftl_l2p.o 00:02:49.538 CC lib/nvmf/rdma.o 00:02:49.538 CC lib/ftl/ftl_l2p_flat.o 00:02:49.538 CC lib/ftl/ftl_nv_cache.o 00:02:49.538 CC lib/ftl/ftl_band.o 00:02:49.538 CC lib/ftl/ftl_band_ops.o 00:02:49.538 CC lib/ftl/ftl_writer.o 00:02:49.538 CC lib/ftl/ftl_rq.o 00:02:49.538 CC lib/ftl/ftl_reloc.o 00:02:49.538 CC lib/ftl/ftl_l2p_cache.o 00:02:49.538 CC lib/ftl/ftl_p2l.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:49.538 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:49.539 CC lib/ftl/utils/ftl_conf.o 00:02:49.539 CC lib/ftl/utils/ftl_md.o 00:02:49.539 CC lib/ftl/utils/ftl_mempool.o 00:02:49.539 CC lib/ftl/utils/ftl_bitmap.o 00:02:49.539 CC lib/ftl/utils/ftl_property.o 00:02:49.539 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.539 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.539 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.539 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.539 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.539 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.539 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:49.539 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:49.539 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:49.539 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:49.539 CC lib/ftl/base/ftl_base_bdev.o 00:02:49.539 CC 
lib/ftl/base/ftl_base_dev.o 00:02:49.539 CC lib/ftl/ftl_trace.o 00:02:50.108 LIB libspdk_nbd.a 00:02:50.108 LIB libspdk_ublk.a 00:02:50.108 LIB libspdk_scsi.a 00:02:50.367 CC lib/iscsi/conn.o 00:02:50.367 CC lib/iscsi/init_grp.o 00:02:50.367 CC lib/iscsi/iscsi.o 00:02:50.367 CC lib/iscsi/md5.o 00:02:50.367 CC lib/iscsi/portal_grp.o 00:02:50.367 CC lib/iscsi/param.o 00:02:50.367 CC lib/iscsi/tgt_node.o 00:02:50.367 CC lib/iscsi/iscsi_subsystem.o 00:02:50.367 CC lib/vhost/vhost.o 00:02:50.367 CC lib/iscsi/iscsi_rpc.o 00:02:50.367 CC lib/iscsi/task.o 00:02:50.367 CC lib/vhost/vhost_rpc.o 00:02:50.367 CC lib/vhost/vhost_scsi.o 00:02:50.367 CC lib/vhost/vhost_blk.o 00:02:50.367 CC lib/vhost/rte_vhost_user.o 00:02:50.367 LIB libspdk_ftl.a 00:02:51.305 LIB libspdk_iscsi.a 00:02:51.305 LIB libspdk_nvmf.a 00:02:51.305 LIB libspdk_vhost.a 00:02:51.874 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.874 CC module/vfu_device/vfu_virtio.o 00:02:51.874 CC module/vfu_device/vfu_virtio_blk.o 00:02:51.874 CC module/vfu_device/vfu_virtio_scsi.o 00:02:51.874 CC module/vfu_device/vfu_virtio_rpc.o 00:02:51.874 CC module/accel/iaa/accel_iaa.o 00:02:51.874 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.874 CC module/sock/posix/posix.o 00:02:51.874 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.874 CC module/accel/error/accel_error.o 00:02:51.874 LIB libspdk_env_dpdk_rpc.a 00:02:51.874 CC module/accel/error/accel_error_rpc.o 00:02:51.874 CC module/accel/ioat/accel_ioat.o 00:02:51.874 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.874 CC module/accel/dsa/accel_dsa.o 00:02:51.874 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.874 CC module/blob/bdev/blob_bdev.o 00:02:51.874 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.874 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:52.133 LIB libspdk_scheduler_gscheduler.a 00:02:52.133 LIB libspdk_scheduler_dpdk_governor.a 00:02:52.133 LIB libspdk_accel_error.a 00:02:52.133 LIB libspdk_accel_iaa.a 00:02:52.133 LIB libspdk_accel_ioat.a 00:02:52.133 LIB libspdk_scheduler_dynamic.a 00:02:52.133 LIB libspdk_accel_dsa.a 00:02:52.133 LIB libspdk_blob_bdev.a 00:02:52.391 LIB libspdk_vfu_device.a 00:02:52.648 LIB libspdk_sock_posix.a 00:02:52.648 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.648 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.648 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:52.648 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:52.648 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:52.648 CC module/bdev/lvol/vbdev_lvol.o 00:02:52.648 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.648 CC module/bdev/ftl/bdev_ftl.o 00:02:52.648 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:52.648 CC module/bdev/delay/vbdev_delay.o 00:02:52.648 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.648 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.648 CC module/bdev/raid/bdev_raid.o 00:02:52.648 CC module/bdev/raid/bdev_raid_sb.o 00:02:52.648 CC module/bdev/aio/bdev_aio_rpc.o 00:02:52.648 CC module/bdev/aio/bdev_aio.o 00:02:52.648 CC module/bdev/nvme/bdev_nvme.o 00:02:52.648 CC module/bdev/raid/raid0.o 00:02:52.648 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.648 CC module/bdev/raid/concat.o 00:02:52.648 CC module/bdev/raid/raid1.o 00:02:52.648 CC module/bdev/nvme/nvme_rpc.o 00:02:52.648 CC module/bdev/iscsi/bdev_iscsi.o 00:02:52.648 CC module/bdev/split/vbdev_split.o 00:02:52.648 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:52.648 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.648 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.648 CC module/bdev/zone_block/vbdev_zone_block.o 
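From this point the log switches from the DPDK subproject to SPDK itself: the terse "CC …" and "LIB libspdk_….a" lines are SPDK's quiet Makefile output, compiling each object and archiving it into a per-module static library. A minimal sketch of running the equivalent build locally follows; the CI job passes configure flags that are not shown in this part of the log:

  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  ./configure            # the job adds its own flags; see ./configure --help
  make -j "$(nproc)"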
00:02:52.648 CC module/bdev/malloc/bdev_malloc.o 00:02:52.648 CC module/bdev/null/bdev_null.o 00:02:52.648 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.648 CC module/bdev/null/bdev_null_rpc.o 00:02:52.648 CC module/bdev/gpt/gpt.o 00:02:52.648 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.648 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.648 CC module/bdev/nvme/vbdev_opal.o 00:02:52.648 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.648 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:52.648 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.648 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.648 CC module/bdev/error/vbdev_error.o 00:02:52.648 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.906 LIB libspdk_blobfs_bdev.a 00:02:52.906 LIB libspdk_bdev_null.a 00:02:52.906 LIB libspdk_bdev_split.a 00:02:52.906 LIB libspdk_bdev_error.a 00:02:52.906 LIB libspdk_bdev_passthru.a 00:02:52.906 LIB libspdk_bdev_aio.a 00:02:52.906 LIB libspdk_bdev_zone_block.a 00:02:52.906 LIB libspdk_bdev_iscsi.a 00:02:52.906 LIB libspdk_bdev_delay.a 00:02:52.906 LIB libspdk_bdev_gpt.a 00:02:52.906 LIB libspdk_bdev_ftl.a 00:02:53.164 LIB libspdk_bdev_lvol.a 00:02:53.164 LIB libspdk_bdev_malloc.a 00:02:53.164 LIB libspdk_bdev_virtio.a 00:02:53.423 LIB libspdk_bdev_raid.a 00:02:54.361 LIB libspdk_bdev_nvme.a 00:02:54.997 CC module/event/subsystems/sock/sock.o 00:02:54.997 CC module/event/subsystems/vmd/vmd.o 00:02:54.997 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:54.997 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:54.997 CC module/event/subsystems/scheduler/scheduler.o 00:02:54.997 CC module/event/subsystems/iobuf/iobuf.o 00:02:54.997 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:54.997 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:55.256 LIB libspdk_event_sock.a 00:02:55.256 LIB libspdk_event_vhost_blk.a 00:02:55.256 LIB libspdk_event_scheduler.a 00:02:55.256 LIB libspdk_event_vfu_tgt.a 00:02:55.256 LIB libspdk_event_vmd.a 00:02:55.256 LIB libspdk_event_iobuf.a 00:02:55.514 CC module/event/subsystems/accel/accel.o 00:02:55.774 LIB libspdk_event_accel.a 00:02:56.033 CC module/event/subsystems/bdev/bdev.o 00:02:56.292 LIB libspdk_event_bdev.a 00:02:56.550 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:56.550 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:56.550 CC module/event/subsystems/scsi/scsi.o 00:02:56.550 CC module/event/subsystems/nbd/nbd.o 00:02:56.550 CC module/event/subsystems/ublk/ublk.o 00:02:56.550 LIB libspdk_event_nbd.a 00:02:56.550 LIB libspdk_event_ublk.a 00:02:56.550 LIB libspdk_event_scsi.a 00:02:56.809 LIB libspdk_event_nvmf.a 00:02:57.067 CC module/event/subsystems/iscsi/iscsi.o 00:02:57.067 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.067 LIB libspdk_event_iscsi.a 00:02:57.067 LIB libspdk_event_vhost_scsi.a 00:02:57.644 CXX app/trace/trace.o 00:02:57.644 CC app/spdk_nvme_identify/identify.o 00:02:57.644 CC app/spdk_top/spdk_top.o 00:02:57.644 CC app/trace_record/trace_record.o 00:02:57.644 CC app/spdk_nvme_perf/perf.o 00:02:57.644 CC app/spdk_lspci/spdk_lspci.o 00:02:57.644 CC app/spdk_nvme_discover/discovery_aer.o 00:02:57.644 TEST_HEADER include/spdk/accel_module.h 00:02:57.644 TEST_HEADER include/spdk/accel.h 00:02:57.644 TEST_HEADER include/spdk/assert.h 00:02:57.644 TEST_HEADER include/spdk/barrier.h 00:02:57.644 TEST_HEADER include/spdk/base64.h 00:02:57.644 CC test/rpc_client/rpc_client_test.o 00:02:57.644 TEST_HEADER include/spdk/bdev.h 00:02:57.644 TEST_HEADER include/spdk/bdev_module.h 00:02:57.644 TEST_HEADER include/spdk/bdev_zone.h 
00:02:57.644 TEST_HEADER include/spdk/bit_array.h 00:02:57.644 TEST_HEADER include/spdk/bit_pool.h 00:02:57.644 TEST_HEADER include/spdk/blob_bdev.h 00:02:57.644 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:57.644 TEST_HEADER include/spdk/blobfs.h 00:02:57.644 TEST_HEADER include/spdk/blob.h 00:02:57.644 TEST_HEADER include/spdk/conf.h 00:02:57.644 TEST_HEADER include/spdk/config.h 00:02:57.644 TEST_HEADER include/spdk/cpuset.h 00:02:57.644 TEST_HEADER include/spdk/crc16.h 00:02:57.644 TEST_HEADER include/spdk/crc32.h 00:02:57.644 TEST_HEADER include/spdk/crc64.h 00:02:57.644 TEST_HEADER include/spdk/dif.h 00:02:57.644 TEST_HEADER include/spdk/dma.h 00:02:57.644 CC app/spdk_dd/spdk_dd.o 00:02:57.644 TEST_HEADER include/spdk/endian.h 00:02:57.644 TEST_HEADER include/spdk/env_dpdk.h 00:02:57.645 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:57.645 TEST_HEADER include/spdk/env.h 00:02:57.645 TEST_HEADER include/spdk/event.h 00:02:57.645 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.645 CC app/nvmf_tgt/nvmf_main.o 00:02:57.645 TEST_HEADER include/spdk/fd_group.h 00:02:57.645 CC app/vhost/vhost.o 00:02:57.645 TEST_HEADER include/spdk/fd.h 00:02:57.645 TEST_HEADER include/spdk/file.h 00:02:57.645 TEST_HEADER include/spdk/ftl.h 00:02:57.645 TEST_HEADER include/spdk/gpt_spec.h 00:02:57.645 TEST_HEADER include/spdk/hexlify.h 00:02:57.645 CC examples/ioat/perf/perf.o 00:02:57.645 TEST_HEADER include/spdk/histogram_data.h 00:02:57.645 CC examples/ioat/verify/verify.o 00:02:57.645 TEST_HEADER include/spdk/idxd.h 00:02:57.645 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.645 CC test/app/stub/stub.o 00:02:57.645 CC examples/accel/perf/accel_perf.o 00:02:57.645 TEST_HEADER include/spdk/idxd_spec.h 00:02:57.645 CC test/event/event_perf/event_perf.o 00:02:57.645 CC examples/vmd/led/led.o 00:02:57.645 TEST_HEADER include/spdk/init.h 00:02:57.645 CC examples/nvme/arbitration/arbitration.o 00:02:57.645 CC test/thread/lock/spdk_lock.o 00:02:57.645 CC examples/sock/hello_world/hello_sock.o 00:02:57.645 CC examples/nvme/hotplug/hotplug.o 00:02:57.645 TEST_HEADER include/spdk/ioat.h 00:02:57.645 CC test/nvme/reserve/reserve.o 00:02:57.645 CC test/nvme/reset/reset.o 00:02:57.645 CC examples/nvme/hello_world/hello_world.o 00:02:57.645 CC test/nvme/aer/aer.o 00:02:57.645 CC test/nvme/startup/startup.o 00:02:57.645 CC test/nvme/connect_stress/connect_stress.o 00:02:57.645 CC test/app/jsoncat/jsoncat.o 00:02:57.645 TEST_HEADER include/spdk/ioat_spec.h 00:02:57.645 CC test/event/reactor/reactor.o 00:02:57.645 CC examples/nvme/reconnect/reconnect.o 00:02:57.645 CC test/env/vtophys/vtophys.o 00:02:57.645 CC app/spdk_tgt/spdk_tgt.o 00:02:57.645 TEST_HEADER include/spdk/iscsi_spec.h 00:02:57.645 CC examples/nvme/abort/abort.o 00:02:57.645 CC test/nvme/boot_partition/boot_partition.o 00:02:57.645 CC test/nvme/e2edp/nvme_dp.o 00:02:57.645 CC test/nvme/simple_copy/simple_copy.o 00:02:57.645 TEST_HEADER include/spdk/json.h 00:02:57.645 CC test/thread/poller_perf/poller_perf.o 00:02:57.645 CC test/app/histogram_perf/histogram_perf.o 00:02:57.645 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.645 CC test/nvme/sgl/sgl.o 00:02:57.645 CC test/nvme/overhead/overhead.o 00:02:57.645 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.645 CC test/event/reactor_perf/reactor_perf.o 00:02:57.645 TEST_HEADER include/spdk/jsonrpc.h 00:02:57.645 CC test/nvme/err_injection/err_injection.o 00:02:57.645 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.645 CC examples/util/zipf/zipf.o 00:02:57.645 CC examples/idxd/perf/perf.o 00:02:57.645 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:02:57.645 CC test/nvme/compliance/nvme_compliance.o 00:02:57.645 CC app/fio/nvme/fio_plugin.o 00:02:57.645 TEST_HEADER include/spdk/likely.h 00:02:57.645 CC test/nvme/fused_ordering/fused_ordering.o 00:02:57.645 TEST_HEADER include/spdk/log.h 00:02:57.645 TEST_HEADER include/spdk/lvol.h 00:02:57.645 TEST_HEADER include/spdk/memory.h 00:02:57.645 TEST_HEADER include/spdk/mmio.h 00:02:57.645 TEST_HEADER include/spdk/nbd.h 00:02:57.645 CC test/event/app_repeat/app_repeat.o 00:02:57.645 TEST_HEADER include/spdk/notify.h 00:02:57.645 TEST_HEADER include/spdk/nvme.h 00:02:57.645 CC examples/blob/cli/blobcli.o 00:02:57.645 CC test/blobfs/mkfs/mkfs.o 00:02:57.645 TEST_HEADER include/spdk/nvme_intel.h 00:02:57.645 CC test/event/scheduler/scheduler.o 00:02:57.645 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:57.645 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:57.645 CC test/app/bdev_svc/bdev_svc.o 00:02:57.645 TEST_HEADER include/spdk/nvme_spec.h 00:02:57.645 CC test/dma/test_dma/test_dma.o 00:02:57.645 CC test/accel/dif/dif.o 00:02:57.645 TEST_HEADER include/spdk/nvme_zns.h 00:02:57.645 CC examples/nvmf/nvmf/nvmf.o 00:02:57.645 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:57.645 CC examples/blob/hello_world/hello_blob.o 00:02:57.645 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:57.645 CC examples/thread/thread/thread_ex.o 00:02:57.645 TEST_HEADER include/spdk/nvmf.h 00:02:57.645 LINK spdk_lspci 00:02:57.645 TEST_HEADER include/spdk/nvmf_spec.h 00:02:57.645 TEST_HEADER include/spdk/nvmf_transport.h 00:02:57.645 CC examples/bdev/bdevperf/bdevperf.o 00:02:57.645 CC examples/bdev/hello_world/hello_bdev.o 00:02:57.645 TEST_HEADER include/spdk/opal.h 00:02:57.645 CC app/fio/bdev/fio_plugin.o 00:02:57.645 TEST_HEADER include/spdk/opal_spec.h 00:02:57.645 CC test/bdev/bdevio/bdevio.o 00:02:57.905 TEST_HEADER include/spdk/pci_ids.h 00:02:57.905 TEST_HEADER include/spdk/pipe.h 00:02:57.905 TEST_HEADER include/spdk/queue.h 00:02:57.905 TEST_HEADER include/spdk/reduce.h 00:02:57.905 TEST_HEADER include/spdk/rpc.h 00:02:57.905 TEST_HEADER include/spdk/scheduler.h 00:02:57.905 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.905 TEST_HEADER include/spdk/scsi.h 00:02:57.905 TEST_HEADER include/spdk/scsi_spec.h 00:02:57.905 TEST_HEADER include/spdk/sock.h 00:02:57.905 TEST_HEADER include/spdk/stdinc.h 00:02:57.905 TEST_HEADER include/spdk/string.h 00:02:57.905 LINK rpc_client_test 00:02:57.905 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:57.905 LINK spdk_nvme_discover 00:02:57.905 TEST_HEADER include/spdk/thread.h 00:02:57.905 CC test/lvol/esnap/esnap.o 00:02:57.905 TEST_HEADER include/spdk/trace.h 00:02:57.905 TEST_HEADER include/spdk/trace_parser.h 00:02:57.905 TEST_HEADER include/spdk/tree.h 00:02:57.905 TEST_HEADER include/spdk/ublk.h 00:02:57.905 TEST_HEADER include/spdk/util.h 00:02:57.905 TEST_HEADER include/spdk/uuid.h 00:02:57.905 TEST_HEADER include/spdk/version.h 00:02:57.905 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:57.905 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:57.905 TEST_HEADER include/spdk/vhost.h 00:02:57.905 TEST_HEADER include/spdk/vmd.h 00:02:57.905 TEST_HEADER include/spdk/xor.h 00:02:57.905 TEST_HEADER include/spdk/zipf.h 00:02:57.905 CXX test/cpp_headers/accel.o 00:02:57.905 LINK spdk_trace_record 00:02:57.905 LINK reactor_perf 00:02:57.905 LINK led 00:02:57.905 LINK jsoncat 00:02:57.905 LINK event_perf 00:02:57.905 LINK reactor 00:02:57.905 LINK lsvmd 00:02:57.905 LINK vtophys 00:02:57.905 LINK interrupt_tgt 00:02:57.905 LINK 
nvmf_tgt 00:02:57.905 LINK poller_perf 00:02:57.905 LINK histogram_perf 00:02:57.905 LINK ioat_perf 00:02:57.905 LINK env_dpdk_post_init 00:02:57.905 LINK zipf 00:02:57.905 LINK vhost 00:02:57.905 LINK stub 00:02:57.905 LINK startup 00:02:57.905 LINK hello_sock 00:02:57.905 LINK connect_stress 00:02:57.905 LINK iscsi_tgt 00:02:57.905 LINK boot_partition 00:02:57.905 LINK fused_ordering 00:02:57.905 LINK app_repeat 00:02:57.905 LINK pmr_persistence 00:02:57.905 LINK err_injection 00:02:57.905 LINK verify 00:02:57.905 LINK reserve 00:02:57.905 fio_plugin.c:1491:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:02:57.905 struct spdk_nvme_fdp_ruhs ruhs; 00:02:57.905 ^ 00:02:57.905 LINK cmb_copy 00:02:57.905 LINK scheduler 00:02:58.167 LINK hotplug 00:02:58.167 LINK bdev_svc 00:02:58.167 LINK hello_world 00:02:58.167 LINK spdk_trace 00:02:58.167 LINK simple_copy 00:02:58.167 LINK spdk_tgt 00:02:58.167 LINK mkfs 00:02:58.167 LINK aer 00:02:58.167 LINK reset 00:02:58.167 LINK sgl 00:02:58.167 LINK hello_blob 00:02:58.167 LINK overhead 00:02:58.167 LINK abort 00:02:58.167 LINK nvme_dp 00:02:58.167 LINK thread 00:02:58.167 CXX test/cpp_headers/accel_module.o 00:02:58.167 LINK hello_bdev 00:02:58.167 LINK nvme_manage 00:02:58.167 LINK nvmf 00:02:58.167 LINK idxd_perf 00:02:58.167 LINK arbitration 00:02:58.167 LINK reconnect 00:02:58.430 LINK spdk_dd 00:02:58.430 LINK test_dma 00:02:58.430 LINK dif 00:02:58.430 LINK accel_perf 00:02:58.430 LINK bdevio 00:02:58.430 LINK nvme_compliance 00:02:58.430 CXX test/cpp_headers/assert.o 00:02:58.430 CXX test/cpp_headers/barrier.o 00:02:58.430 LINK nvme_fuzz 00:02:58.430 LINK blobcli 00:02:58.430 1 warning generated. 
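The single warning tallied above is clang's -Wgnu-variable-sized-type-not-at-end diagnostic: struct spdk_nvme_fdp_ruhs ends in a flexible array member, so it is variable-sized, and embedding it in fio_plugin.c's enclosing struct with fields after it relies on a GNU extension. A standalone reproduction of the same diagnostic, using a hypothetical stand-in struct rather than the real SPDK definition:

  # Reproduce the -Wgnu-variable-sized-type-not-at-end warning with clang.
  cat > /tmp/ruhs_repro.c <<'EOF'
  /* A struct ending in a flexible array member is variable sized. */
  struct ruhs { unsigned int count; unsigned int desc[]; };
  /* Embedding it with a field after it is the GNU extension clang flags here. */
  struct wrapper { struct ruhs r; int tail; };
  EOF
  clang -Wgnu-variable-sized-type-not-at-end -c /tmp/ruhs_repro.c -o /tmp/ruhs_repro.o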
00:02:58.689 CXX test/cpp_headers/base64.o 00:02:58.689 LINK spdk_nvme_identify 00:02:58.689 LINK spdk_nvme 00:02:58.689 LINK mem_callbacks 00:02:58.689 LINK spdk_bdev 00:02:58.689 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:58.689 CC test/env/memory/memory_ut.o 00:02:58.689 CXX test/cpp_headers/bdev.o 00:02:58.689 CXX test/cpp_headers/bdev_module.o 00:02:58.689 CXX test/cpp_headers/bdev_zone.o 00:02:58.689 CXX test/cpp_headers/bit_array.o 00:02:58.689 LINK spdk_nvme_perf 00:02:58.951 CC test/nvme/fdp/fdp.o 00:02:58.951 CXX test/cpp_headers/bit_pool.o 00:02:58.951 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.951 CC test/env/pci/pci_ut.o 00:02:58.951 CC test/nvme/cuse/cuse.o 00:02:58.951 CXX test/cpp_headers/blob_bdev.o 00:02:58.951 LINK bdevperf 00:02:58.951 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:58.951 LINK spdk_top 00:02:58.951 LINK doorbell_aers 00:02:58.951 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:58.951 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:58.951 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:58.951 CXX test/cpp_headers/blobfs_bdev.o 00:02:59.210 CXX test/cpp_headers/blobfs.o 00:02:59.210 CXX test/cpp_headers/blob.o 00:02:59.210 CXX test/cpp_headers/conf.o 00:02:59.210 CXX test/cpp_headers/config.o 00:02:59.210 CXX test/cpp_headers/cpuset.o 00:02:59.210 CXX test/cpp_headers/crc16.o 00:02:59.210 CXX test/cpp_headers/crc32.o 00:02:59.210 CXX test/cpp_headers/crc64.o 00:02:59.210 CXX test/cpp_headers/dif.o 00:02:59.210 CXX test/cpp_headers/dma.o 00:02:59.210 CXX test/cpp_headers/endian.o 00:02:59.210 LINK fdp 00:02:59.210 CXX test/cpp_headers/env_dpdk.o 00:02:59.210 CXX test/cpp_headers/env.o 00:02:59.210 CXX test/cpp_headers/event.o 00:02:59.477 CXX test/cpp_headers/fd_group.o 00:02:59.477 CXX test/cpp_headers/fd.o 00:02:59.477 CXX test/cpp_headers/file.o 00:02:59.477 CXX test/cpp_headers/ftl.o 00:02:59.477 CXX test/cpp_headers/gpt_spec.o 00:02:59.477 CXX test/cpp_headers/hexlify.o 00:02:59.477 LINK llvm_vfio_fuzz 00:02:59.477 CXX test/cpp_headers/histogram_data.o 00:02:59.477 CXX test/cpp_headers/idxd.o 00:02:59.477 CXX test/cpp_headers/idxd_spec.o 00:02:59.477 CXX test/cpp_headers/init.o 00:02:59.477 CXX test/cpp_headers/ioat.o 00:02:59.477 CXX test/cpp_headers/ioat_spec.o 00:02:59.477 CXX test/cpp_headers/iscsi_spec.o 00:02:59.477 CXX test/cpp_headers/json.o 00:02:59.477 CXX test/cpp_headers/jsonrpc.o 00:02:59.477 CXX test/cpp_headers/likely.o 00:02:59.477 LINK pci_ut 00:02:59.477 CXX test/cpp_headers/log.o 00:02:59.477 CXX test/cpp_headers/lvol.o 00:02:59.477 CXX test/cpp_headers/memory.o 00:02:59.477 CXX test/cpp_headers/mmio.o 00:02:59.477 CXX test/cpp_headers/nbd.o 00:02:59.477 CXX test/cpp_headers/notify.o 00:02:59.477 CXX test/cpp_headers/nvme.o 00:02:59.477 CXX test/cpp_headers/nvme_intel.o 00:02:59.477 CXX test/cpp_headers/nvme_ocssd.o 00:02:59.477 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:59.477 CXX test/cpp_headers/nvme_spec.o 00:02:59.477 CXX test/cpp_headers/nvme_zns.o 00:02:59.477 CXX test/cpp_headers/nvmf_cmd.o 00:02:59.477 LINK vhost_fuzz 00:02:59.477 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:59.477 CXX test/cpp_headers/nvmf.o 00:02:59.477 CXX test/cpp_headers/nvmf_spec.o 00:02:59.477 CXX test/cpp_headers/nvmf_transport.o 00:02:59.737 CXX test/cpp_headers/opal.o 00:02:59.737 CXX test/cpp_headers/pci_ids.o 00:02:59.737 CXX test/cpp_headers/opal_spec.o 00:02:59.737 CXX test/cpp_headers/pipe.o 00:02:59.737 CXX test/cpp_headers/queue.o 00:02:59.737 CXX test/cpp_headers/reduce.o 00:02:59.737 CXX test/cpp_headers/rpc.o 
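The "CXX test/cpp_headers/….o" run threading through this stretch builds one C++ object per public SPDK header, which verifies that each header compiles standalone and is usable from C++. Roughly the same check can be scripted directly; this is a sketch of the idea, not the actual SPDK test harness, and some headers may legitimately need extra flags:

  # Syntax-check every public header as its own C++ translation unit.
  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  for h in include/spdk/*.h; do
    echo "#include <spdk/$(basename "$h")>" |
      g++ -std=c++11 -Iinclude -fsyntax-only -x c++ - || echo "FAILED: $h"
  done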
00:02:59.737 CXX test/cpp_headers/scheduler.o 00:02:59.737 CXX test/cpp_headers/scsi.o 00:02:59.737 CXX test/cpp_headers/scsi_spec.o 00:02:59.737 CXX test/cpp_headers/sock.o 00:02:59.737 CXX test/cpp_headers/stdinc.o 00:02:59.737 CXX test/cpp_headers/string.o 00:02:59.737 CXX test/cpp_headers/trace.o 00:02:59.737 CXX test/cpp_headers/thread.o 00:02:59.737 CXX test/cpp_headers/trace_parser.o 00:02:59.737 CXX test/cpp_headers/ublk.o 00:02:59.737 CXX test/cpp_headers/tree.o 00:02:59.737 CXX test/cpp_headers/util.o 00:02:59.737 CXX test/cpp_headers/uuid.o 00:02:59.737 CXX test/cpp_headers/version.o 00:02:59.737 CXX test/cpp_headers/vfio_user_pci.o 00:02:59.737 CXX test/cpp_headers/vfio_user_spec.o 00:02:59.737 CXX test/cpp_headers/vhost.o 00:02:59.737 CXX test/cpp_headers/vmd.o 00:02:59.737 CXX test/cpp_headers/xor.o 00:02:59.737 CXX test/cpp_headers/zipf.o 00:02:59.996 LINK llvm_nvme_fuzz 00:02:59.996 LINK memory_ut 00:02:59.996 LINK spdk_lock 00:03:00.255 LINK cuse 00:03:00.514 LINK iscsi_fuzz 00:03:03.049 LINK esnap 00:03:03.618 00:03:03.618 real 0m50.693s 00:03:03.618 user 7m54.816s 00:03:03.618 sys 2m51.251s 00:03:03.618 13:44:54 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:03.618 13:44:54 -- common/autotest_common.sh@10 -- $ set +x 00:03:03.618 ************************************ 00:03:03.618 END TEST make 00:03:03.618 ************************************ 00:03:03.618 13:44:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:03:03.618 13:44:54 -- nvmf/common.sh@7 -- # uname -s 00:03:03.618 13:44:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:03.618 13:44:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:03.618 13:44:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:03.618 13:44:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:03.618 13:44:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:03.618 13:44:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:03.618 13:44:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:03.618 13:44:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:03.618 13:44:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:03.618 13:44:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:03.618 13:44:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:03:03.618 13:44:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:03:03.618 13:44:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:03.618 13:44:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:03.618 13:44:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:03.618 13:44:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:03.618 13:44:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:03.618 13:44:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:03.618 13:44:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:03.618 13:44:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.618 13:44:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.618 13:44:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.618 13:44:54 -- paths/export.sh@5 -- # export PATH 00:03:03.618 13:44:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:03.618 13:44:54 -- nvmf/common.sh@46 -- # : 0 00:03:03.618 13:44:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:03.618 13:44:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:03.618 13:44:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:03.618 13:44:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:03.618 13:44:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:03.618 13:44:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:03.618 13:44:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:03.618 13:44:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:03.618 13:44:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:03.618 13:44:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:03.618 13:44:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:03.618 13:44:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:03.618 13:44:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:03.618 13:44:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:03.618 13:44:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:03:03.618 13:44:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:03.618 13:44:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:03.618 13:44:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:03.618 13:44:54 -- spdk/autotest.sh@48 -- # udevadm_pid=3834638 00:03:03.618 13:44:54 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:03:03.618 13:44:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:03.618 13:44:54 -- spdk/autotest.sh@54 -- # echo 3834640 00:03:03.618 13:44:54 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:03:03.618 13:44:54 -- spdk/autotest.sh@56 -- # echo 3834641 00:03:03.618 13:44:54 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:03.618 13:44:54 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:03:03.618 13:44:54 -- spdk/autotest.sh@60 -- # echo 3834642 00:03:03.618 13:44:54 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:03:03.618 13:44:54 -- spdk/autotest.sh@62 -- # echo 3834643 00:03:03.618 13:44:54 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.618 13:44:54 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:03:03.618 13:44:54 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:03.618 13:44:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:03.618 13:44:54 -- common/autotest_common.sh@10 -- # set +x 00:03:03.618 13:44:54 -- spdk/autotest.sh@70 -- # create_test_list 00:03:03.618 13:44:54 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:03.618 13:44:54 -- common/autotest_common.sh@10 -- # set +x 00:03:03.618 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:03.618 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:03.618 13:44:54 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:03:03.878 13:44:54 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:03.878 13:44:54 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:03.878 13:44:54 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:03.878 13:44:54 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:03.878 13:44:54 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:03.878 13:44:54 -- common/autotest_common.sh@1440 -- # uname 00:03:03.878 13:44:54 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:03.878 13:44:54 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:03.878 13:44:54 -- common/autotest_common.sh@1460 -- # uname 00:03:03.878 13:44:54 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:03.878 13:44:54 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:03.878 13:44:54 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=clang 00:03:03.878 13:44:54 -- spdk/autotest.sh@83 -- # hash lcov 00:03:03.878 13:44:54 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:03:03.878 13:44:54 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:03.878 13:44:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:03.878 13:44:54 -- common/autotest_common.sh@10 -- # set +x 00:03:03.878 13:44:54 -- spdk/autotest.sh@102 -- # rm -f 00:03:03.878 13:44:54 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.070 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:03:08.070 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.4 (8086 
2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:08.070 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:08.329 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:08.329 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:08.329 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:08.329 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:08.329 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:10.234 13:45:01 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:10.234 13:45:01 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:10.234 13:45:01 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:10.234 13:45:01 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:10.234 13:45:01 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:10.234 13:45:01 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:10.234 13:45:01 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:10.234 13:45:01 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:10.234 13:45:01 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:10.234 13:45:01 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:10.234 13:45:01 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:10.234 13:45:01 -- spdk/autotest.sh@121 -- # grep -v p 00:03:10.234 13:45:01 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:10.234 13:45:01 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:10.234 13:45:01 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:10.234 13:45:01 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:10.234 13:45:01 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:10.234 No valid GPT data, bailing 00:03:10.493 13:45:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:10.494 13:45:01 -- scripts/common.sh@393 -- # pt= 00:03:10.494 13:45:01 -- scripts/common.sh@394 -- # return 1 00:03:10.494 13:45:01 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:10.494 1+0 records in 00:03:10.494 1+0 records out 00:03:10.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234828 s, 447 MB/s 00:03:10.494 13:45:01 -- spdk/autotest.sh@129 -- # sync 00:03:10.494 13:45:01 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:10.494 13:45:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:10.494 13:45:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:15.767 13:45:06 -- spdk/autotest.sh@135 -- # uname -s 00:03:15.767 13:45:06 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:15.767 13:45:06 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.767 13:45:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.767 13:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.767 13:45:06 -- 
common/autotest_common.sh@10 -- # set +x 00:03:15.767 ************************************ 00:03:15.767 START TEST setup.sh 00:03:15.767 ************************************ 00:03:15.767 13:45:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:15.767 * Looking for test storage... 00:03:15.767 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.767 13:45:06 -- setup/test-setup.sh@10 -- # uname -s 00:03:15.767 13:45:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:15.767 13:45:06 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:15.767 13:45:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.767 13:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.767 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:03:15.767 ************************************ 00:03:15.767 START TEST acl 00:03:15.767 ************************************ 00:03:15.767 13:45:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:15.767 * Looking for test storage... 00:03:15.767 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:15.767 13:45:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:15.767 13:45:06 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:15.767 13:45:06 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:15.767 13:45:06 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:15.767 13:45:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:15.767 13:45:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:15.768 13:45:06 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:15.768 13:45:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:15.768 13:45:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:15.768 13:45:06 -- setup/acl.sh@12 -- # devs=() 00:03:15.768 13:45:06 -- setup/acl.sh@12 -- # declare -a devs 00:03:15.768 13:45:06 -- setup/acl.sh@13 -- # drivers=() 00:03:15.768 13:45:06 -- setup/acl.sh@13 -- # declare -A drivers 00:03:15.768 13:45:06 -- setup/acl.sh@51 -- # setup reset 00:03:15.768 13:45:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.768 13:45:06 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.353 13:45:12 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:22.353 13:45:12 -- setup/acl.sh@16 -- # local dev driver 00:03:22.353 13:45:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.353 13:45:12 -- setup/acl.sh@15 -- # setup output status 00:03:22.353 13:45:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.353 13:45:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:26.596 Hugepages 00:03:26.596 node hugesize free / total 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 
1048576kB == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 00:03:26.596 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.596 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.596 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:03:26.596 13:45:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:26.596 13:45:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:26.597 13:45:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:26.597 13:45:16 -- setup/acl.sh@20 -- # continue 00:03:26.597 13:45:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:26.597 13:45:16 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:26.597 13:45:16 -- setup/acl.sh@54 -- # run_test denied denied 00:03:26.597 13:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.597 13:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.597 13:45:16 -- common/autotest_common.sh@10 -- # set +x 00:03:26.597 ************************************ 00:03:26.597 START TEST denied 00:03:26.597 ************************************ 00:03:26.597 13:45:16 -- common/autotest_common.sh@1104 -- # denied 00:03:26.597 13:45:16 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:03:26.597 13:45:16 -- setup/acl.sh@38 -- # setup output config 00:03:26.597 13:45:16 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:03:26.597 13:45:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.597 13:45:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:33.169 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:03:33.169 13:45:23 -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:03:33.169 13:45:23 -- setup/acl.sh@28 -- # local dev driver 00:03:33.169 13:45:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:33.169 13:45:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:03:33.169 13:45:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:03:33.169 13:45:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:33.169 13:45:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:33.169 13:45:23 -- setup/acl.sh@41 -- # setup reset 00:03:33.169 13:45:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.169 13:45:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.295 
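The denied test above drives scripts/setup.sh with PCI_BLOCKED set to the NVMe controller's BDF and then asserts that the config pass reports the skip. A minimal standalone sketch of that check, assuming only the PCI_BLOCKED semantics and the "Skipping denied controller" message visible in this trace (check_denied is an illustrative name, not a helper from the suite):

#!/usr/bin/env bash
# check_denied: hypothetical wrapper around the assertion the denied
# test performs; relies only on behaviour shown in the trace above.
check_denied() {
    local bdf=$1
    # Block the controller for this one invocation, then require the
    # config pass to announce that it skipped it.
    PCI_BLOCKED=" $bdf" ./scripts/setup.sh config \
        | grep -q "Skipping denied controller at $bdf"
}
check_denied 0000:1a:00.0 && echo "0000:1a:00.0 skipped as expected"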
00:03:41.295 real 0m13.817s 00:03:41.295 user 0m4.314s 00:03:41.295 sys 0m8.701s 00:03:41.295 13:45:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.295 13:45:30 -- common/autotest_common.sh@10 -- # set +x 00:03:41.295 ************************************ 00:03:41.295 END TEST denied 00:03:41.295 ************************************ 00:03:41.295 13:45:30 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:41.295 13:45:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.295 13:45:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.295 13:45:30 -- common/autotest_common.sh@10 -- # set +x 00:03:41.295 ************************************ 00:03:41.295 START TEST allowed 00:03:41.295 ************************************ 00:03:41.295 13:45:30 -- common/autotest_common.sh@1104 -- # allowed 00:03:41.295 13:45:30 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:03:41.295 13:45:30 -- setup/acl.sh@45 -- # setup output config 00:03:41.295 13:45:30 -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:03:41.295 13:45:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.295 13:45:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:03:49.421 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:03:49.421 13:45:40 -- setup/acl.sh@47 -- # verify 00:03:49.421 13:45:40 -- setup/acl.sh@28 -- # local dev driver 00:03:49.421 13:45:40 -- setup/acl.sh@48 -- # setup reset 00:03:49.421 13:45:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.421 13:45:40 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.995 00:03:55.995 real 0m15.756s 00:03:55.995 user 0m4.149s 00:03:55.995 sys 0m8.476s 00:03:55.995 13:45:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.995 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 ************************************ 00:03:55.995 END TEST allowed 00:03:55.995 ************************************ 00:03:55.995 00:03:55.995 real 0m40.166s 00:03:55.995 user 0m11.999s 00:03:55.995 sys 0m24.512s 00:03:55.995 13:45:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.995 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 ************************************ 00:03:55.995 END TEST acl 00:03:55.995 ************************************ 00:03:55.995 13:45:46 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.995 13:45:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.995 13:45:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.995 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:03:55.995 ************************************ 00:03:55.995 START TEST hugepages 00:03:55.995 ************************************ 00:03:55.995 13:45:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:03:55.995 * Looking for test storage... 
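Both the denied and allowed paths finish with the same verify step: resolve the controller's driver symlink in sysfs and compare its basename with the expected driver. A minimal sketch of that resolution, assuming nothing beyond the readlink trace above (driver_of is an illustrative name, not part of acl.sh):

# driver_of: print which kernel driver a PCI device is bound to, the
# same way verify resolves it via the sysfs driver symlink.
driver_of() {
    local bdf=$1
    [[ -e /sys/bus/pci/devices/$bdf/driver ]] || return 1
    basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"
}
# After the allowed test's config pass the trace shows the rebind
# "nvme -> vfio-pci", so the corresponding check would be:
[[ "$(driver_of 0000:1a:00.0)" == vfio-pci ]]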
00:03:55.995 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:55.995 13:45:46 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:55.995 13:45:46 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:55.995 13:45:46 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:55.995 13:45:46 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:55.995 13:45:46 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:55.995 13:45:46 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:55.995 13:45:46 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:55.995 13:45:46 -- setup/common.sh@18 -- # local node= 00:03:55.995 13:45:46 -- setup/common.sh@19 -- # local var val 00:03:55.995 13:45:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.995 13:45:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.995 13:45:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.995 13:45:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.995 13:45:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.995 13:45:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.995 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.995 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.995 13:45:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 68042612 kB' 'MemAvailable: 71999648 kB' 'Buffers: 9896 kB' 'Cached: 16493232 kB' 'SwapCached: 0 kB' 'Active: 13280136 kB' 'Inactive: 3731648 kB' 'Active(anon): 12723928 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512060 kB' 'Mapped: 209972 kB' 'Shmem: 12215272 kB' 'KReclaimable: 505364 kB' 'Slab: 922564 kB' 'SReclaimable: 505364 kB' 'SUnreclaim: 417200 kB' 'KernelStack: 16224 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438216 kB' 'Committed_AS: 14096356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214120 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:03:55.995 13:45:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.996 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.996 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.996 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.996 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.996 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.996 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.996 13:45:46 -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue cycle repeats, with no match, for every /proc/meminfo field from Cached through ShmemHugePages]
00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': '
00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # continue 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.997 13:45:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.997 13:45:46 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:55.997 13:45:46 -- setup/common.sh@33 -- # echo 2048 00:03:55.997 13:45:46 -- setup/common.sh@33 -- # return 0 00:03:55.997 13:45:46 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:55.997 13:45:46 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:55.997 13:45:46 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:55.997 13:45:46 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:55.997 13:45:46 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:55.998 13:45:46 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:55.998 13:45:46 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:55.998 13:45:46 -- setup/hugepages.sh@207 -- # get_nodes 00:03:55.998 13:45:46 -- setup/hugepages.sh@27 -- # local node 00:03:55.998 13:45:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.998 13:45:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:55.998 13:45:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.998 13:45:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:55.998 13:45:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.998 13:45:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.998 13:45:46 -- setup/hugepages.sh@208 -- # clear_hp 00:03:55.998 13:45:46 -- setup/hugepages.sh@37 -- # local node hp 00:03:55.998 13:45:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.998 13:45:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.998 13:45:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.998 13:45:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.998 13:45:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.998 13:45:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.998 13:45:46 -- setup/hugepages.sh@41 -- # echo 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.998 13:45:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.998 13:45:46 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:55.998 13:45:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.998 13:45:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.998 13:45:46 -- common/autotest_common.sh@10 -- # set +x 00:03:55.998 ************************************ 00:03:55.998 START TEST default_setup 00:03:55.998 ************************************ 00:03:55.998 13:45:46 -- common/autotest_common.sh@1104 -- # default_setup 00:03:55.998 13:45:46 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.998 13:45:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.998 13:45:46 -- setup/hugepages.sh@51 -- # shift 00:03:55.998 13:45:46 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:55.998 13:45:46 -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.998 13:45:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.998 13:45:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.998 13:45:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:55.998 13:45:46 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.998 13:45:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.998 13:45:46 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.998 13:45:46 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.998 13:45:46 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.998 13:45:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
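The meminfo scans condensed above all run through the same setup/common.sh helper. Reassembled from the xtrace into one readable function (the traced script already has extglob enabled; the for-loop form below stands in for the traced read loop, everything else is lifted directly from the traced commands):

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# get_meminfo <field> [node]: print the value of <field> from
# /proc/meminfo, or from a node's meminfo file when a node is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo Hugepagesize   # on this runner: 2048 (kB)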
00:03:55.998 13:45:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.998 13:45:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.998 13:45:46 -- setup/hugepages.sh@73 -- # return 0 00:03:55.998 13:45:46 -- setup/hugepages.sh@137 -- # setup output 00:03:55.998 13:45:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.998 13:45:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:00.192 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:00.192 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.550 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:04:05.459 13:45:56 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:05.459 13:45:56 -- setup/hugepages.sh@89 -- # local node 00:04:05.459 13:45:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.459 13:45:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.459 13:45:56 -- setup/hugepages.sh@92 -- # local surp 00:04:05.459 13:45:56 -- setup/hugepages.sh@93 -- # local resv 00:04:05.459 13:45:56 -- setup/hugepages.sh@94 -- # local anon 00:04:05.459 13:45:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.459 13:45:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.459 13:45:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.459 13:45:56 -- setup/common.sh@18 -- # local node= 00:04:05.459 13:45:56 -- setup/common.sh@19 -- # local var val 00:04:05.459 13:45:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.459 13:45:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.459 13:45:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.459 13:45:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.459 13:45:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.459 13:45:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.459 13:45:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70243112 kB' 'MemAvailable: 74200084 kB' 'Buffers: 9896 kB' 'Cached: 16493412 kB' 'SwapCached: 0 kB' 'Active: 13291396 kB' 'Inactive: 3731648 kB' 'Active(anon): 12735188 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522668 kB' 'Mapped: 209940 kB' 'Shmem: 12215452 kB' 'KReclaimable: 505300 kB' 'Slab: 920876 kB' 'SReclaimable: 505300 kB' 'SUnreclaim: 415576 kB' 'KernelStack: 
16272 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14113400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214104 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.459 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.459 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats, with no match, for every /proc/meminfo field from Inactive(file) through VmallocTotal]
00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 --
setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.460 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.460 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.461 13:45:56 -- setup/common.sh@33 -- # echo 0 00:04:05.461 13:45:56 -- setup/common.sh@33 -- # return 0 00:04:05.461 13:45:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.461 13:45:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.461 13:45:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.461 13:45:56 -- setup/common.sh@18 -- # local node= 00:04:05.461 13:45:56 -- setup/common.sh@19 -- # local var val 00:04:05.461 13:45:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.461 13:45:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.461 13:45:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.461 13:45:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.461 13:45:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.461 13:45:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70243016 kB' 'MemAvailable: 74199988 kB' 'Buffers: 9896 kB' 'Cached: 16493412 kB' 'SwapCached: 0 kB' 'Active: 13290980 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734772 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522668 kB' 'Mapped: 209896 kB' 'Shmem: 12215452 kB' 'KReclaimable: 505300 kB' 'Slab: 920884 kB' 'SReclaimable: 505300 kB' 'SUnreclaim: 415584 kB' 'KernelStack: 16240 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14113412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214088 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 
kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.461 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.461 13:45:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.461 13:45:56 -- 
setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats, with no match, for every /proc/meminfo field from Mlocked through VmallocChunk]
00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': '
00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # continue 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.462 13:45:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
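For readability: the run of [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue lines elided above is bash xtrace of one small loop in setup/common.sh that scans /proc/meminfo key by key. A minimal sketch of that get_meminfo pattern, reconstructed from the trace (names follow the trace; the real setup/common.sh differs in detail):

#!/usr/bin/env bash
shopt -s extglob # the trace relies on extglob patterns such as +([0-9])

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    # A node argument switches to the per-NUMA-node file, as the trace
    # does for node 0 further down in this log.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Per-node files prefix each line with "Node N "; strip it so the key
    # comes first (mirrors mem=("${mem[@]#Node +([0-9]) }") in the trace).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue # the repeated comparisons above
        echo "${val:-0}"
        return 0
    done
    return 1
}

surp=$(get_meminfo HugePages_Surp) # prints 0 on this host, matching surp=0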
00:04:05.462 13:45:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.462 13:45:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.462 13:45:56 -- setup/common.sh@18 -- # local node=
00:04:05.462 13:45:56 -- setup/common.sh@19 -- # local var val
00:04:05.462 13:45:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.462 13:45:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.462 13:45:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.462 13:45:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.462 13:45:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.462 13:45:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.462 13:45:56 -- setup/common.sh@31 -- # IFS=': '
00:04:05.462 13:45:56 -- setup/common.sh@31 -- # read -r var val _
00:04:05.463 13:45:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70243608 kB' 'MemAvailable: 74200580 kB' 'Buffers: 9896 kB' 'Cached: 16493412 kB' 'SwapCached: 0 kB' 'Active: 13291312 kB' 'Inactive: 3731648 kB' 'Active(anon): 12735104 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523076 kB' 'Mapped: 209896 kB' 'Shmem: 12215452 kB' 'KReclaimable: 505300 kB' 'Slab: 920884 kB' 'SReclaimable: 505300 kB' 'SUnreclaim: 415584 kB' 'KernelStack: 16256 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14113428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214088 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:05.463 13:45:56 -- setup/common.sh@31-32 -- # [repetitive xtrace elided: the same per-key scan, MemTotal onward, until HugePages_Rsvd matched]
00:04:05.464 13:45:56 -- setup/common.sh@33 -- # echo 0
00:04:05.464 13:45:56 -- setup/common.sh@33 -- # return 0
00:04:05.464 13:45:56 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.464 13:45:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:05.464 nr_hugepages=1024
00:04:05.464 13:45:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.464 resv_hugepages=0
00:04:05.464 13:45:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.464 surplus_hugepages=0
00:04:05.464 13:45:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.464 anon_hugepages=0
00:04:05.464 13:45:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.464 13:45:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
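The lines above and the HugePages_Total query that follows boil down to one arithmetic assertion over the queried values. A sketch of the same check, reusing the get_meminfo sketch above; the numbers in comments are this run's:

nr_hugepages=1024                      # requested page count
surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run

# default_setup passes only if the kernel's total equals the request plus
# surplus and reserved pages: 1024 == 1024 + 0 + 0 here.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
    exit 1
fi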
00:04:05.464 13:45:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.464 13:45:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.464 13:45:56 -- setup/common.sh@18 -- # local node=
00:04:05.464 13:45:56 -- setup/common.sh@19 -- # local var val
00:04:05.464 13:45:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.464 13:45:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.464 13:45:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.464 13:45:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.464 13:45:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.464 13:45:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.464 13:45:56 -- setup/common.sh@31 -- # IFS=': '
00:04:05.464 13:45:56 -- setup/common.sh@31 -- # read -r var val _
00:04:05.464 13:45:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70244636 kB' 'MemAvailable: 74201608 kB' 'Buffers: 9896 kB' 'Cached: 16493440 kB' 'SwapCached: 0 kB' 'Active: 13290976 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734768 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522668 kB' 'Mapped: 209896 kB' 'Shmem: 12215480 kB' 'KReclaimable: 505300 kB' 'Slab: 920872 kB' 'SReclaimable: 505300 kB' 'SUnreclaim: 415572 kB' 'KernelStack: 16240 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14113440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214088 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:05.465 13:45:56 -- setup/common.sh@31-32 -- # [repetitive xtrace elided: per-key scan until HugePages_Total matched]
00:04:05.466 13:45:56 -- setup/common.sh@33 -- # echo 1024
00:04:05.466 13:45:56 -- setup/common.sh@33 -- # return 0
00:04:05.466 13:45:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:05.466 13:45:56 -- setup/hugepages.sh@112 -- # get_nodes
00:04:05.466 13:45:56 -- setup/hugepages.sh@27 -- # local node
00:04:05.466 13:45:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.466 13:45:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:05.466 13:45:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:05.466 13:45:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:05.466 13:45:56 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:05.466 13:45:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
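get_nodes, traced just above, enumerates the NUMA nodes and records how many hugepages the kernel placed on each (1024 on node0, 0 on node1 here). A hypothetical condensed equivalent, again reusing the get_meminfo sketch:

shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    node_id=${node##*node} # "/sys/devices/system/node/node0" -> "0"
    # Per-node totals come from that node's own meminfo copy.
    nodes_sys[node_id]=$(get_meminfo HugePages_Total "$node_id")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "nodes=$no_nodes per-node totals: ${nodes_sys[*]}" # "1024 0" on this host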
00:04:05.466 13:45:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:05.466 13:45:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:05.466 13:45:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.466 13:45:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.466 13:45:56 -- setup/common.sh@18 -- # local node=0
00:04:05.466 13:45:56 -- setup/common.sh@19 -- # local var val
00:04:05.466 13:45:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.466 13:45:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.466 13:45:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.466 13:45:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.466 13:45:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.466 13:45:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.466 13:45:56 -- setup/common.sh@31 -- # IFS=': '
00:04:05.466 13:45:56 -- setup/common.sh@31 -- # read -r var val _
00:04:05.466 13:45:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 39903604 kB' 'MemUsed: 8213364 kB' 'SwapCached: 0 kB' 'Active: 3921604 kB' 'Inactive: 285832 kB' 'Active(anon): 3503952 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903160 kB' 'Mapped: 120908 kB' 'AnonPages: 307468 kB' 'Shmem: 3199676 kB' 'KernelStack: 8232 kB' 'PageTables: 5652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 580432 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 234044 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:05.467 13:45:56 -- setup/common.sh@31-32 -- # [repetitive xtrace elided: per-key scan of the node0 meminfo copy until HugePages_Surp matched]
00:04:05.468 13:45:56 -- setup/common.sh@33 -- # echo 0
00:04:05.468 13:45:56 -- setup/common.sh@33 -- # return 0
00:04:05.468 13:45:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.468 13:45:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.468 13:45:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.468 13:45:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.468 13:45:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:05.468 node0=1024 expecting 1024
00:04:05.468 13:45:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:05.468
00:04:05.468 real 0m9.412s
00:04:05.468 user 0m2.252s
00:04:05.468 sys 0m4.178s
00:04:05.468 13:45:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:05.468 13:45:56 -- common/autotest_common.sh@10 -- # set +x
00:04:05.468 ************************************
00:04:05.468 END TEST default_setup
00:04:05.468 ************************************
00:04:05.468 13:45:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:05.468 13:45:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:05.468 13:45:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:05.468 13:45:56 -- common/autotest_common.sh@10 -- # set +x
00:04:05.468 ************************************
00:04:05.468 START TEST per_node_1G_alloc
00:04:05.468 ************************************
00:04:05.468 13:45:56 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:05.468 13:45:56 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:05.468 13:45:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:05.468 13:45:56 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:05.468 13:45:56 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:05.468 13:45:56 -- setup/hugepages.sh@51 -- # shift
00:04:05.468 13:45:56 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:05.468 13:45:56 -- setup/hugepages.sh@52 -- # local node_ids
00:04:05.468 13:45:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.468 13:45:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:05.468 13:45:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:05.468 13:45:56 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:05.468 13:45:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.468 13:45:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:05.468 13:45:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:05.468 13:45:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.468 13:45:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.468 13:45:56 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:05.468 13:45:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.468 13:45:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:05.468 13:45:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.468 13:45:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:05.468 13:45:56 -- setup/hugepages.sh@73 -- # return 0
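The arithmetic behind get_test_nr_hugepages 1048576 0 1, under assumed but trace-consistent semantics: 1048576 kB is 1 GiB, and at the 2048 kB hugepage size reported in the meminfo dumps above that is 512 pages for each of nodes 0 and 1:

size_kb=1048576          # requested per-node size: 1 GiB in kB
hugepagesize_kb=2048     # Hugepagesize from the meminfo dumps above
nr_hugepages=$(( size_kb / hugepagesize_kb ))
echo "$nr_hugepages"     # 512, matching nr_hugepages=512 in the trace

nodes_test=()
for node_id in 0 1; do   # the user-supplied node list
    nodes_test[node_id]=$nr_hugepages # 512 pages (1 GiB) per node
done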
00:04:05.468 13:45:56 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:05.468 13:45:56 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:05.468 13:45:56 -- setup/hugepages.sh@146 -- # setup output
00:04:05.468 13:45:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.468 13:45:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:09.664 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:09.664 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:09.664 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:11.568 13:46:02 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:11.568 13:46:02 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:11.568 13:46:02 -- setup/hugepages.sh@89 -- # local node
00:04:11.568 13:46:02 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.568 13:46:02 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.568 13:46:02 -- setup/hugepages.sh@92 -- # local surp
00:04:11.568 13:46:02 -- setup/hugepages.sh@93 -- # local resv
00:04:11.568 13:46:02 -- setup/hugepages.sh@94 -- # local anon
00:04:11.568 13:46:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.568 13:46:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.568 13:46:02 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.568 13:46:02 -- setup/common.sh@18 -- # local node=
00:04:11.568 13:46:02 -- setup/common.sh@19 -- # local var val
00:04:11.568 13:46:02 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.568 13:46:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.568 13:46:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.568 13:46:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.568 13:46:02 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.568 13:46:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.568 13:46:02 -- setup/common.sh@31 -- # IFS=': '
00:04:11.568 13:46:02 -- setup/common.sh@31 -- # read -r var val _
00:04:11.569 13:46:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70262600 kB' 'MemAvailable: 74219476 kB' 'Buffers: 9896 kB' 'Cached: 16493548 kB' 'SwapCached: 0 kB' 'Active: 13291244 kB' 'Inactive: 3731648 kB' 'Active(anon): 12735036 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522264 kB' 'Mapped: 209124 kB' 'Shmem: 12215588 kB' 'KReclaimable: 505204 kB' 'Slab: 920916 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415712 kB' 'KernelStack: 16192 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14107196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214248 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:11.569 13:46:02 -- setup/common.sh@31-32 -- # [repetitive xtrace elided: per-key scan for AnonHugePages under way as this excerpt ends]
-- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- 
setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.569 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.569 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 13:46:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 13:46:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 13:46:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:11.570 13:46:02 -- setup/common.sh@33 -- # echo 0 00:04:11.570 13:46:02 -- setup/common.sh@33 -- # return 0 00:04:11.570 13:46:02 -- setup/hugepages.sh@97 -- # anon=0 00:04:11.570 13:46:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:11.570 13:46:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.570 13:46:02 -- setup/common.sh@18 -- # local node= 00:04:11.570 13:46:02 -- setup/common.sh@19 -- # local var val 00:04:11.570 13:46:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.570 13:46:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.570 13:46:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.570 13:46:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.570 13:46:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.570 13:46:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.570 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.570 13:46:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70260912 kB' 'MemAvailable: 74217788 kB' 'Buffers: 9896 kB' 'Cached: 16493552 kB' 'SwapCached: 0 kB' 'Active: 13290856 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734648 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522368 kB' 'Mapped: 209100 kB' 'Shmem: 12215592 kB' 'KReclaimable: 505204 kB' 'Slab: 920952 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415748 kB' 'KernelStack: 16416 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14107204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
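The trace above shows the mechanism behind get_meminfo: common.sh reads the whole of /proc/meminfo into an array with mapfile, strips any leading "Node <n>" prefix, then splits each line on IFS=': ' and walks the fields until the requested key matches, echoing the value and returning. A minimal self-contained sketch of that pattern (get_meminfo_sketch is a hypothetical stand-in reconstructed from the trace, not the actual setup/common.sh source):

  #!/usr/bin/env bash
  # Sketch reconstructed from the trace: return one field from /proc/meminfo,
  # or from a per-node copy when a node number is supplied.
  shopt -s extglob

  get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    # Per-node statistics live under sysfs when a node is given; with no
    # node argument this path does not exist and /proc/meminfo is used.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it (extglob).
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }

  get_meminfo_sketch HugePages_Total    # prints 1024 on this runner
  get_meminfo_sketch HugePages_Free 0   # node-0 query; prints 512 here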
00:04:11.570 13:46:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.570 13:46:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.570 13:46:02 -- setup/common.sh@18 -- # local node=
00:04:11.570 13:46:02 -- setup/common.sh@19 -- # local var val
00:04:11.570 13:46:02 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.570 13:46:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.570 13:46:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.570 13:46:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.570 13:46:02 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.570 13:46:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.570 13:46:02 -- setup/common.sh@31 -- # IFS=': '
00:04:11.570 13:46:02 -- setup/common.sh@31 -- # read -r var val _
00:04:11.570 13:46:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70260912 kB' 'MemAvailable: 74217788 kB' 'Buffers: 9896 kB' 'Cached: 16493552 kB' 'SwapCached: 0 kB' 'Active: 13290856 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734648 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522368 kB' 'Mapped: 209100 kB' 'Shmem: 12215592 kB' 'KReclaimable: 505204 kB' 'Slab: 920952 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415748 kB' 'KernelStack: 16416 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14107204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[... xtrace of the per-field scan elided: setup/common.sh@32 tests each field from MemTotal through HugePages_Rsvd against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skips it via continue ...]
00:04:11.571 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.571 13:46:02 -- setup/common.sh@33 -- # echo 0
00:04:11.571 13:46:02 -- setup/common.sh@33 -- # return 0
00:04:11.571 13:46:02 -- setup/hugepages.sh@99 -- # surp=0
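Each of these get_meminfo calls re-reads the whole file to pull out a single hugepage counter. For reference, HugePages_Rsvd counts pages already promised to mappings but not yet faulted in, and HugePages_Surp counts pages allocated above nr_hugepages. All four counters can be eyeballed in one shot; the commented output below is what this run reports:

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  # HugePages_Total:    1024
  # HugePages_Free:     1024
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0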
00:04:11.571 13:46:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:11.571 13:46:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:11.571 13:46:02 -- setup/common.sh@18 -- # local node=
00:04:11.571 13:46:02 -- setup/common.sh@19 -- # local var val
00:04:11.571 13:46:02 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.571 13:46:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.571 13:46:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.571 13:46:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.571 13:46:02 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.571 13:46:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.571 13:46:02 -- setup/common.sh@31 -- # IFS=': '
00:04:11.571 13:46:02 -- setup/common.sh@31 -- # read -r var val _
00:04:11.571 13:46:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70259284 kB' 'MemAvailable: 74216160 kB' 'Buffers: 9896 kB' 'Cached: 16493564 kB' 'SwapCached: 0 kB' 'Active: 13290700 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734492 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522148 kB' 'Mapped: 209100 kB' 'Shmem: 12215604 kB' 'KReclaimable: 505204 kB' 'Slab: 920952 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415748 kB' 'KernelStack: 16352 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14107220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[... xtrace of the per-field scan elided: setup/common.sh@32 tests each field from MemTotal through HugePages_Free against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skips it via continue ...]
00:04:11.572 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:11.573 13:46:02 -- setup/common.sh@33 -- # echo 0
00:04:11.573 13:46:02 -- setup/common.sh@33 -- # return 0
00:04:11.573 13:46:02 -- setup/hugepages.sh@100 -- # resv=0
00:04:11.573 13:46:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:11.573 nr_hugepages=1024
00:04:11.573 13:46:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:11.573 resv_hugepages=0
00:04:11.573 13:46:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:11.573 surplus_hugepages=0
00:04:11.573 13:46:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:11.573 anon_hugepages=0
00:04:11.573 13:46:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.573 13:46:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
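The two arithmetic checks just traced are the heart of verify_nr_hugepages: the kernel's reported page count must equal the requested count once surplus and reserved pages are folded in. Restated as a standalone sketch with this run's numbers inlined (the variable names here are illustrative, not the script's own):

  nr_hugepages=1024  # count requested through setup.sh
  total=1024         # get_meminfo HugePages_Total
  surp=0             # get_meminfo HugePages_Surp
  resv=0             # get_meminfo HugePages_Rsvd
  # Mirrors the traced checks: totals reconcile, and no page is surplus.
  (( total == nr_hugepages + surp + resv )) || echo "hugepage count mismatch"
  (( total == nr_hugepages )) && echo "all requested pages are persistent"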
00:04:11.573 13:46:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:11.573 13:46:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:11.573 13:46:02 -- setup/common.sh@18 -- # local node=
00:04:11.573 13:46:02 -- setup/common.sh@19 -- # local var val
00:04:11.573 13:46:02 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.573 13:46:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.573 13:46:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.573 13:46:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.573 13:46:02 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.573 13:46:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.573 13:46:02 -- setup/common.sh@31 -- # IFS=': '
00:04:11.573 13:46:02 -- setup/common.sh@31 -- # read -r var val _
00:04:11.573 13:46:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70258076 kB' 'MemAvailable: 74214952 kB' 'Buffers: 9896 kB' 'Cached: 16493580 kB' 'SwapCached: 0 kB' 'Active: 13290752 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734544 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522152 kB' 'Mapped: 209100 kB' 'Shmem: 12215620 kB' 'KReclaimable: 505204 kB' 'Slab: 920952 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415748 kB' 'KernelStack: 16288 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14107056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214296 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[... xtrace of the per-field scan elided: setup/common.sh@32 tests each field from MemTotal through Unaccepted against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skips it via continue ...]
00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:11.574 13:46:02 -- setup/common.sh@33 -- # echo 1024
00:04:11.574 13:46:02 -- setup/common.sh@33 -- # return 0
00:04:11.574 13:46:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:11.574 13:46:02 -- setup/hugepages.sh@112 -- # get_nodes
00:04:11.574 13:46:02 -- setup/hugepages.sh@27 -- # local node
00:04:11.574 13:46:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.574 13:46:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:11.574 13:46:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:11.574 13:46:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:11.574 13:46:02 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:11.574 13:46:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:11.574 13:46:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:11.574 13:46:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
'MemTotal: 48116968 kB' 'MemFree: 40967260 kB' 'MemUsed: 7149708 kB' 'SwapCached: 0 kB' 'Active: 3920536 kB' 'Inactive: 285832 kB' 'Active(anon): 3502884 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903284 kB' 'Mapped: 120772 kB' 'AnonPages: 306252 kB' 'Shmem: 3199800 kB' 'KernelStack: 8216 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 580148 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 233760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.574 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.574 13:46:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- 
setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 
00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.575 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.575 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@33 -- # echo 0 00:04:11.836 13:46:02 -- setup/common.sh@33 -- # return 0 00:04:11.836 13:46:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.836 13:46:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.836 13:46:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.836 13:46:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:11.836 13:46:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.836 13:46:02 -- setup/common.sh@18 -- # local node=1 00:04:11.836 13:46:02 -- setup/common.sh@19 -- # local var val 00:04:11.836 13:46:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.836 13:46:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.836 13:46:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:11.836 13:46:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:11.836 13:46:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.836 13:46:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 29291448 kB' 'MemUsed: 14885112 kB' 'SwapCached: 0 kB' 'Active: 9370076 kB' 'Inactive: 3445816 kB' 'Active(anon): 9231520 kB' 'Inactive(anon): 0 kB' 'Active(file): 138556 kB' 'Inactive(file): 3445816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12600204 kB' 'Mapped: 88388 kB' 'AnonPages: 215760 kB' 'Shmem: 9015832 kB' 'KernelStack: 8008 kB' 'PageTables: 3220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158816 kB' 'Slab: 340804 kB' 'SReclaimable: 158816 kB' 'SUnreclaim: 181988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 
00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- 
setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.836 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.836 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # continue 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.837 13:46:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.837 13:46:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.837 13:46:02 -- setup/common.sh@33 -- # echo 0 00:04:11.837 13:46:02 -- setup/common.sh@33 -- # return 0 00:04:11.837 13:46:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.837 13:46:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.837 13:46:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.837 13:46:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.837 13:46:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.837 node0=512 expecting 512 00:04:11.837 13:46:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.837 13:46:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.837 13:46:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.837 13:46:02 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:11.837 node1=512 expecting 512 00:04:11.837 13:46:02 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.837 00:04:11.837 real 0m6.277s 00:04:11.837 user 0m2.290s 00:04:11.837 sys 0m4.063s 00:04:11.837 13:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.837 13:46:02 -- common/autotest_common.sh@10 -- # set +x 00:04:11.837 ************************************ 00:04:11.837 END TEST per_node_1G_alloc 00:04:11.837 ************************************ 00:04:11.837 13:46:02 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:11.837 
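The trace above is setup/common.sh's get_meminfo helper resolving HugePages_Surp for node 0 and then node 1: when a node index is given it reads /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo, strips the "Node N " prefix from each line, then scans "key: value" pairs until the requested key matches and echoes its value. A minimal standalone sketch of that lookup pattern; the function name is hypothetical and details of the real helper may differ:

    #!/usr/bin/env bash
    # Illustrative re-implementation of the lookup pattern traced above;
    # get_meminfo_sketch is an assumed name, not the script's own helper.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node index, prefer the per-node meminfo file under sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node N " prefix; strip it so every line
        # is a plain "Key: value [kB]" record before splitting.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # First field is the key, second the value; trailing "kB" is dropped.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0   # -> 0, matching the node0 dump above

The linear scan explains the long xtrace runs: every meminfo key before the target produces one comparison and one continue.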
00:04:11.837 13:46:02 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
13:46:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:11.837 13:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:11.837 13:46:02 -- common/autotest_common.sh@10 -- # set +x
00:04:11.837 ************************************
00:04:11.837 START TEST even_2G_alloc
00:04:11.837 ************************************
00:04:11.837 13:46:02 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:11.837 13:46:02 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:11.837 13:46:02 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.837 13:46:02 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.837 13:46:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:11.837 13:46:02 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:11.837 13:46:02 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.837 13:46:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.837 13:46:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:11.837 13:46:02 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.837 13:46:02 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.837 13:46:02 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:11.837 13:46:02 -- setup/hugepages.sh@83 -- # : 512
00:04:11.837 13:46:02 -- setup/hugepages.sh@84 -- # : 1
00:04:11.837 13:46:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:11.837 13:46:02 -- setup/hugepages.sh@83 -- # : 0
00:04:11.837 13:46:02 -- setup/hugepages.sh@84 -- # : 0
00:04:11.837 13:46:02 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:11.837 13:46:02 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:11.837 13:46:02 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:11.837 13:46:02 -- setup/hugepages.sh@153 -- # setup output
00:04:11.837 13:46:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.837 13:46:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:16.028 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:16.028 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:16.028 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:16.029 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:16.029 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:16.029 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
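The get_test_nr_hugepages 2097152 trace above reduces to simple arithmetic: 2097152 kB requested at a 2048 kB Hugepagesize (visible in the meminfo dumps below) means nr_hugepages=1024, and HUGE_EVEN_ALLOC=yes spreads those evenly over the two NUMA nodes as 512+512. A hedged sketch of that computation; the helper name and the plain integer division are assumptions for illustration:

    #!/usr/bin/env bash
    # Sketch of the even-allocation arithmetic, not the script's own code.
    even_split_sketch() {
        local size_kb=$1 hugepagesize_kb=$2 no_nodes=$3
        local nr_hugepages=$((size_kb / hugepagesize_kb))   # 2097152 / 2048 = 1024
        local per_node=$((nr_hugepages / no_nodes))         # 1024 / 2 = 512
        local node
        for ((node = 0; node < no_nodes; node++)); do
            echo "node${node}=${per_node} pages"            # node0=512, node1=512
        done
    }

    even_split_sketch 2097152 2048 2

This is exactly the "node0=512 expecting 512 / node1=512 expecting 512" outcome the previous test verified, now re-requested via NRHUGE=1024 before the setup.sh run whose device output appears above.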
00:04:17.937 13:46:08 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:17.937 13:46:08 -- setup/hugepages.sh@89 -- # local node
00:04:17.937 13:46:08 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:17.937 13:46:08 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:17.937 13:46:08 -- setup/hugepages.sh@92 -- # local surp
00:04:17.937 13:46:08 -- setup/hugepages.sh@93 -- # local resv
00:04:17.937 13:46:08 -- setup/hugepages.sh@94 -- # local anon
00:04:17.937 13:46:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:17.937 13:46:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:17.937 13:46:08 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:17.937 13:46:08 -- setup/common.sh@18 -- # local node=
00:04:17.937 13:46:08 -- setup/common.sh@19 -- # local var val
00:04:17.937 13:46:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.937 13:46:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.937 13:46:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.937 13:46:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.937 13:46:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.937 13:46:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.937 13:46:08 -- setup/common.sh@31 -- # IFS=': '
00:04:17.937 13:46:08 -- setup/common.sh@31 -- # read -r var val _
00:04:17.937 13:46:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70252120 kB' 'MemAvailable: 74208996 kB' 'Buffers: 9896 kB' 'Cached: 16493716 kB' 'SwapCached: 0 kB' 'Active: 13290940 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734732 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522348 kB' 'Mapped: 209192 kB' 'Shmem: 12215756 kB' 'KReclaimable: 505204 kB' 'Slab: 920752 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415548 kB' 'KernelStack: 16240 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14103968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214248 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[... setup/common.sh@31-32 xtrace, 00:04:17.937-938: every key from MemTotal through HardwareCorrupted compared against AnonHugePages; no match, continue ...]
00:04:17.938 13:46:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:17.938 13:46:08 -- setup/common.sh@33 -- # echo 0
00:04:17.938 13:46:08 -- setup/common.sh@33 -- # return 0
00:04:17.938 13:46:08 -- setup/hugepages.sh@97 -- # anon=0
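verify_nr_hugepages now has anon=0 and goes on to collect the surplus and reserved counters. The earlier hugepages.sh@110 check, (( 1024 == nr_hugepages + surp + resv )), shows the identity being enforced: the kernel-reported HugePages_Total must equal the requested page count plus surplus plus reserved pages. A hypothetical standalone recheck of that same identity, not the script's own code:

    #!/usr/bin/env bash
    # Assumed helper name; reads the same three counters the trace fetches
    # one get_meminfo call at a time, then applies the @110 identity.
    check_hugepages_sketch() {
        local nr_hugepages=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
        (( total == nr_hugepages + surp + resv )) || {
            echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
            return 1
        }
    }

    check_hugepages_sketch 1024   # passes here: total=1024, surp=0, resv=0

On this run the dumps show HugePages_Total: 1024 with zero surplus and zero reserved, so the identity holds exactly.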
setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.939 13:46:08 -- setup/common.sh@31 -- # 
read -r var val _
00:04:17.939 13:46:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.939 13:46:08 -- setup/common.sh@32 -- # continue
[trace condensed: the @31 read / @32 compare-and-continue cycle repeats for the remaining /proc/meminfo keys (Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) until the requested key matches]
00:04:17.940 13:46:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.940 13:46:08 -- setup/common.sh@33 -- # echo 0
00:04:17.940 13:46:08 -- setup/common.sh@33 -- # return 0
00:04:17.940 13:46:08 -- setup/hugepages.sh@99 -- # surp=0
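The @31-@33 cycle above is one complete pass of the suite's get_meminfo helper: each meminfo line is split on ': ' into a key and a value, every non-matching key logs a "continue", and the value of the requested key (here HugePages_Surp, value 0) is echoed back to the caller. The backslash-heavy right-hand side in the [[ ]] lines is just xtrace quoting the literal string HugePages_Surp. A minimal standalone sketch of the same idea; the function name and the direct file read are simplifications, not the SPDK source:

    #!/usr/bin/env bash
    # Sketch: look up one key in /proc/meminfo the way the trace above does,
    # splitting each line on ': ' and skipping keys that do not match.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching keys fall through
            echo "$val"                        # numeric field; the "kB" unit lands in $_
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # prints 0 on the machine in this log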
00:04:17.940 13:46:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:17.940 13:46:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:17.940 13:46:08 -- setup/common.sh@18 -- # local node=
00:04:17.940 13:46:08 -- setup/common.sh@19 -- # local var val
00:04:17.940 13:46:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.940 13:46:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.940 13:46:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.940 13:46:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.940 13:46:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.940 13:46:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.940 13:46:08 -- setup/common.sh@31 -- # IFS=': '
00:04:17.940 13:46:08 -- setup/common.sh@31 -- # read -r var val _
00:04:17.940 13:46:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70254076 kB' 'MemAvailable: 74210952 kB' 'Buffers: 9896 kB' 'Cached: 16493728 kB' 'SwapCached: 0 kB' 'Active: 13291168 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734960 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522508 kB' 'Mapped: 209164 kB' 'Shmem: 12215768 kB' 'KReclaimable: 505204 kB' 'Slab: 920740 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415536 kB' 'KernelStack: 16176 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14103992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214200 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[trace condensed: the @31 read / @32 compare-and-continue cycle repeats for every key from MemTotal down to HugePages_Free until the requested key matches]
00:04:17.942 13:46:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:17.942 13:46:08 -- setup/common.sh@33 -- # echo 0
00:04:17.942 13:46:08 -- setup/common.sh@33 -- # return 0
00:04:17.942 13:46:08 -- setup/hugepages.sh@100 -- # resv=0
00:04:17.942 13:46:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:17.942 nr_hugepages=1024
00:04:17.942 13:46:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:17.942 resv_hugepages=0
00:04:17.942 13:46:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:17.942 surplus_hugepages=0
00:04:17.942 13:46:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:17.942 anon_hugepages=0
00:04:17.942 13:46:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:17.942 13:46:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
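The @107 line asserts that the pool adds up as the trace read it: HugePages_Total (1024) equals the requested nr_hugepages plus the surplus and reserved counts just fetched (both 0 here). A rough recheck of the same arithmetic against a live machine; this mirrors the trace's check, it is not claimed as a general kernel invariant once reservations are outstanding:

    #!/usr/bin/env bash
    # Sketch: re-derive the @107 identity from the current /proc/meminfo.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    # On the machine in this log: 1024 == 1024 + 0 + 0
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    fi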
00:04:17.942 13:46:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:17.942 13:46:08 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:17.942 13:46:08 -- setup/common.sh@18 -- # local node=
00:04:17.942 13:46:08 -- setup/common.sh@19 -- # local var val
00:04:17.942 13:46:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.942 13:46:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.942 13:46:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:17.942 13:46:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:17.942 13:46:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.942 13:46:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.942 13:46:08 -- setup/common.sh@31 -- # IFS=': '
00:04:17.942 13:46:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70252620 kB' 'MemAvailable: 74209496 kB' 'Buffers: 9896 kB' 'Cached: 16493756 kB' 'SwapCached: 0 kB' 'Active: 13290768 kB' 'Inactive: 3731648 kB' 'Active(anon): 12734560 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522068 kB' 'Mapped: 209164 kB' 'Shmem: 12215796 kB' 'KReclaimable: 505204 kB' 'Slab: 920740 kB' 'SReclaimable: 505204 kB' 'SUnreclaim: 415536 kB' 'KernelStack: 16160 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14104008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214200 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:17.942 13:46:08 -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the @31 read / @32 compare-and-continue cycle repeats for every key from MemTotal down to Unaccepted until the requested key matches]
00:04:17.943 13:46:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:17.944 13:46:08 -- setup/common.sh@33 -- # echo 1024
00:04:17.944 13:46:08 -- setup/common.sh@33 -- # return 0
00:04:17.944 13:46:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:17.944 13:46:08 -- setup/hugepages.sh@112 -- # get_nodes
00:04:17.944 13:46:08 -- setup/hugepages.sh@27 -- # local node
00:04:17.944 13:46:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.944 13:46:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:17.944 13:46:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:17.944 13:46:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:17.944 13:46:08 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:17.944 13:46:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
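get_nodes above walked /sys/devices/system/node with the extglob pattern node+([0-9]), found two NUMA nodes, and recorded 512 expected hugepages on each (xtrace shows the already-expanded value 512). A standalone sketch of that discovery; reading the per-node count from the 2048 kB sysfs counter is our assumption about where the value comes from:

    #!/usr/bin/env bash
    # Sketch: enumerate NUMA nodes like the @29/@30 lines, taking each node's
    # preallocated 2 MiB hugepage count from sysfs instead of a literal 512.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on the machine in this log
    for n in "${!nodes_sys[@]}"; do echo "node$n=${nodes_sys[$n]}"; done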
00:04:17.944 13:46:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.944 13:46:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.944 13:46:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:17.944 13:46:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.944 13:46:08 -- setup/common.sh@18 -- # local node=0
00:04:17.944 13:46:08 -- setup/common.sh@19 -- # local var val
00:04:17.944 13:46:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:17.944 13:46:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.944 13:46:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:17.944 13:46:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:17.944 13:46:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.944 13:46:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.944 13:46:08 -- setup/common.sh@31 -- # IFS=': '
00:04:17.944 13:46:08 -- setup/common.sh@31 -- # read -r var val _
00:04:17.944 13:46:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 40943256 kB' 'MemUsed: 7173712 kB' 'SwapCached: 0 kB' 'Active: 3920648 kB' 'Inactive: 285832 kB' 'Active(anon): 3502996 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903388 kB' 'Mapped: 120832 kB' 'AnonPages: 306180 kB' 'Shmem: 3199904 kB' 'KernelStack: 8136 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 580116 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 233728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the @31 read / @32 compare-and-continue cycle repeats for every node0 meminfo key from MemTotal down to HugePages_Free until the requested key matches]
00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.945 13:46:08 -- setup/common.sh@33 -- # echo 0
00:04:17.945 13:46:08 -- setup/common.sh@33 -- # return 0
00:04:17.945 13:46:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
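Node 0's surplus (0) has just been folded into its expected count; the identical @115-@117 cycle runs for node 1 next. The whole accumulation in one place, as a sketch: the 512/512 seed mirrors this even_2G_alloc run, and the per-node meminfo files follow the standard Linux NUMA layout ("Node N Key: value"):

    #!/usr/bin/env bash
    # Sketch: add reserved and per-node surplus pages onto each node's
    # expected count, the way the @115-@117 loop does.
    declare -A nodes_test=([0]=512 [1]=512)   # seed from the test's split
    resv=0                                    # HugePages_Rsvd read earlier
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(awk '/HugePages_Surp:/ {print $NF}' \
            "/sys/devices/system/node/node$node/meminfo")
        (( nodes_test[node] += surp ))
    done
    echo "node0=${nodes_test[0]} expecting 512"
    echo "node1=${nodes_test[1]} expecting 512"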
-- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.945 13:46:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 29309364 kB' 'MemUsed: 14867196 kB' 'SwapCached: 0 kB' 'Active: 9370328 kB' 'Inactive: 3445816 kB' 'Active(anon): 9231772 kB' 'Inactive(anon): 0 kB' 'Active(file): 138556 kB' 'Inactive(file): 3445816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12600264 kB' 'Mapped: 88332 kB' 'AnonPages: 216100 kB' 'Shmem: 9015892 kB' 'KernelStack: 8008 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158816 kB' 'Slab: 340624 kB' 'SReclaimable: 158816 kB' 'SUnreclaim: 181808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.945 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.945 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # continue 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.946 13:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ HugePages_Total 
00:04:17.946 13:46:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.946 13:46:08 -- setup/common.sh@33 -- # echo 0
00:04:17.946 13:46:08 -- setup/common.sh@33 -- # return 0
00:04:17.946 13:46:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.946 13:46:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.946 13:46:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.946 13:46:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:17.946 node0=512 expecting 512
00:04:17.946 13:46:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.946 13:46:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.946 13:46:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.946 13:46:08 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:17.946 node1=512 expecting 512
00:04:17.946 13:46:08 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:17.946
00:04:17.946 real 0m6.210s
00:04:17.946 user 0m2.212s
00:04:17.946 sys 0m4.073s
00:04:17.946 13:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:17.946 13:46:08 -- common/autotest_common.sh@10 -- # set +x
00:04:17.946 ************************************
00:04:17.946 END TEST even_2G_alloc
00:04:17.946 ************************************
00:04:17.946 13:46:08 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:17.946 13:46:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:17.946 13:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:17.946 13:46:08 -- common/autotest_common.sh@10 -- # set +x
00:04:17.946 ************************************
00:04:17.946 START TEST odd_alloc
00:04:17.946 ************************************
00:04:17.946 13:46:08 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:17.946 13:46:08 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:17.946 13:46:08 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:17.946 13:46:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:17.946 13:46:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:17.946 13:46:08 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:17.946 13:46:08 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.946 13:46:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:17.946 13:46:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.946 13:46:08 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.946 13:46:08 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.946 13:46:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
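The entries that follow show get_test_nr_hugepages_per_node splitting an odd page count across the two NUMA nodes: HUGEMEM=2049 requests 2,098,176 kB of huge memory, which at 2,048 kB per page works out to just over 1,024 pages, and the helper settles on the odd count 1,025 (the later snapshots show Hugetlb: 2099200 kB, which is exactly 1025 x 2048 kB). A minimal bash sketch of the back-to-front split the trace performs; the variable names mirror the trace, but the arithmetic is a paraphrase, not the script's literal source:

  _nr_hugepages=1025   # from get_test_nr_hugepages 2098176
  _no_nodes=2
  declare -a nodes_test
  while (( _no_nodes > 0 )); do
    (( nodes_test[_no_nodes - 1] = _nr_hugepages / _no_nodes ))  # 1025/2 = 512, then 513/1 = 513
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # 513 left, then 0
    : $(( --_no_nodes ))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"           # node0=513 node1=512

The ": 513" / ": 1" and ": 0" / ": 0" entries below are exactly these no-op arithmetic expansions being traced.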
00:04:17.946 13:46:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:17.946 13:46:08 -- setup/hugepages.sh@83 -- # : 513
00:04:17.946 13:46:08 -- setup/hugepages.sh@84 -- # : 1
00:04:17.946 13:46:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:17.946 13:46:08 -- setup/hugepages.sh@83 -- # : 0
00:04:17.946 13:46:08 -- setup/hugepages.sh@84 -- # : 0
00:04:17.946 13:46:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.946 13:46:08 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:17.946 13:46:08 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:17.946 13:46:08 -- setup/hugepages.sh@160 -- # setup output
00:04:17.946 13:46:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.946 13:46:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:22.142 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:22.142 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:22.142 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.042 13:46:15 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:24.042 13:46:15 -- setup/hugepages.sh@89 -- # local node
00:04:24.042 13:46:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.042 13:46:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.042 13:46:15 -- setup/hugepages.sh@92 -- # local surp
00:04:24.042 13:46:15 -- setup/hugepages.sh@93 -- # local resv
00:04:24.042 13:46:15 -- setup/hugepages.sh@94 -- # local anon
00:04:24.042 13:46:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.042 13:46:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
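From here, verify_nr_hugepages pulls individual fields out of /proc/meminfo with get_meminfo, and the walls of [[ ... ]] / continue xtrace entries condensed below are simply that helper testing every field name in turn until it reaches the one requested. A self-contained sketch of the scan; per the trace, the real helper also supports per-node files (stripping the "Node N " prefix via the mem=("${mem[@]#Node +([0-9]) }") expansion), while this paraphrase handles only the system-wide file:

  get_meminfo() {
    # Scan "Key: value [kB]" pairs and print the value of the requested key.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # every mismatch is one xtrace continue entry
      echo "$val" && return 0
    done </proc/meminfo
    echo 0
  }

  get_meminfo HugePages_Total   # prints 1025 once the odd allocation took effect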
00:04:24.042 13:46:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.042 13:46:15 -- setup/common.sh@18 -- # local node=
00:04:24.042 13:46:15 -- setup/common.sh@19 -- # local var val
00:04:24.042 13:46:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.042 13:46:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.042 13:46:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.042 13:46:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.042 13:46:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.042 13:46:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.042 13:46:15 -- setup/common.sh@31 -- # IFS=': '
00:04:24.042 13:46:15 -- setup/common.sh@31 -- # read -r var val _
00:04:24.042 13:46:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70271124 kB' 'MemAvailable: 74227968 kB' 'Buffers: 9896 kB' 'Cached: 16493888 kB' 'SwapCached: 0 kB' 'Active: 13294816 kB' 'Inactive: 3731648 kB' 'Active(anon): 12738608 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526040 kB' 'Mapped: 210140 kB' 'Shmem: 12215928 kB' 'KReclaimable: 505172 kB' 'Slab: 921652 kB' 'SReclaimable: 505172 kB' 'SUnreclaim: 416480 kB' 'KernelStack: 16400 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14139124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:24.042 13:46:15 -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped with continue]
00:04:24.043 13:46:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.043 13:46:15 -- setup/common.sh@33 -- # echo 0
00:04:24.043 13:46:15 -- setup/common.sh@33 -- # return 0
00:04:24.043 13:46:15 -- setup/hugepages.sh@97 -- # anon=0
00:04:24.043 13:46:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:24.043 13:46:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.043 13:46:15 -- setup/common.sh@18 -- # local node=
00:04:24.043 13:46:15 -- setup/common.sh@19 -- # local var val
00:04:24.043 13:46:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.043 13:46:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.043 13:46:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.043 13:46:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.043 13:46:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.043 13:46:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.043 13:46:15 -- setup/common.sh@31 -- # IFS=': '
00:04:24.043 13:46:15 -- setup/common.sh@31 -- # read -r var val _
00:04:24.043 13:46:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70272300 kB' 'MemAvailable: 74229144 kB' 'Buffers: 9896 kB' 'Cached: 16493892 kB' 'SwapCached: 0 kB' 'Active: 13294644 kB' 'Inactive: 3731648 kB' 'Active(anon): 12738436 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525876 kB' 'Mapped: 210168 kB' 'Shmem: 12215932 kB' 'KReclaimable: 505172 kB' 'Slab: 921680 kB' 'SReclaimable: 505172 kB' 'SUnreclaim: 416508 kB' 'KernelStack: 16288 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14139504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214248 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
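The snapshot above already carries the verdict the next get_meminfo calls will extract one field at a time: HugePages_Total and HugePages_Free both read 1025, with no surplus or reserved pages. To eyeball the same four counters on a live machine, a plain grep (not part of the SPDK scripts) is enough:

  grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo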
00:04:24.306 13:46:15 -- setup/common.sh@32 -- # [xtrace condensed: every field from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped with continue]
00:04:24.306 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.306 13:46:15 -- setup/common.sh@33 -- # echo 0
00:04:24.306 13:46:15 -- setup/common.sh@33 -- # return 0
00:04:24.306 13:46:15 -- setup/hugepages.sh@99 -- # surp=0
00:04:24.306 13:46:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:24.306 13:46:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:24.306 13:46:15 -- setup/common.sh@18 -- # local node=
00:04:24.306 13:46:15 -- setup/common.sh@19 -- # local var val
00:04:24.306 13:46:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.306 13:46:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.306 13:46:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.306 13:46:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.306 13:46:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.306 13:46:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.306 13:46:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70272444 kB' 'MemAvailable: 74229288 kB' 'Buffers: 9896 kB' 'Cached: 16493900 kB' 'SwapCached: 0 kB' 'Active: 13294616 kB' 'Inactive: 3731648 kB' 'Active(anon): 12738408 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525864 kB' 'Mapped: 210168 kB' 'Shmem: 12215940 kB' 'KReclaimable: 505172 kB' 'Slab: 921680 kB' 'SReclaimable: 505172 kB' 'SUnreclaim: 416508 kB' 'KernelStack: 16288 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14139516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214248 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:24.306 13:46:15 -- setup/common.sh@31 -- # IFS=': '
00:04:24.306 13:46:15 -- setup/common.sh@31 -- # read -r var val _
00:04:24.306 13:46:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.306 13:46:15 -- setup/common.sh@32 -- # continue
00:04:24.306 13:46:15 -- setup/common.sh@31 -- # IFS=': '
00:04:24.306 13:46:15 -- setup/common.sh@31 -- # read -r var val _
00:04:24.306 13:46:15 -- setup/common.sh@32 -- # [xtrace condensed: the remaining fields, MemFree through HugePages_Free, are each tested against HugePages_Rsvd and skipped with continue]
00:04:24.307 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.307 13:46:15 -- setup/common.sh@33 -- # echo 0
00:04:24.307 13:46:15 -- setup/common.sh@33 -- # return 0
00:04:24.307 13:46:15 -- setup/hugepages.sh@100 -- # resv=0
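With anon, surp, and resv collected, the bookkeeping below checks that the kernel really delivered the request: the 1,025 pages asked for must equal HugePages_Total plus surplus plus reserved pages. A compact restatement of that check (a hypothetical standalone snippet, with field extraction via awk rather than the traced helper):

  requested=1025
  total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
  if (( requested == total + surp + resv )) && (( requested == total )); then
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
  else
    echo "hugepage accounting mismatch" >&2
  fi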
-- # echo nr_hugepages=1025 00:04:24.307 nr_hugepages=1025 00:04:24.307 13:46:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.307 resv_hugepages=0 00:04:24.307 13:46:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.307 surplus_hugepages=0 00:04:24.307 13:46:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.307 anon_hugepages=0 00:04:24.307 13:46:15 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.307 13:46:15 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.307 13:46:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.307 13:46:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.307 13:46:15 -- setup/common.sh@18 -- # local node= 00:04:24.307 13:46:15 -- setup/common.sh@19 -- # local var val 00:04:24.307 13:46:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.307 13:46:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.307 13:46:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.307 13:46:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.307 13:46:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.307 13:46:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.307 13:46:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70272444 kB' 'MemAvailable: 74229288 kB' 'Buffers: 9896 kB' 'Cached: 16493928 kB' 'SwapCached: 0 kB' 'Active: 13294272 kB' 'Inactive: 3731648 kB' 'Active(anon): 12738064 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525452 kB' 'Mapped: 210168 kB' 'Shmem: 12215968 kB' 'KReclaimable: 505172 kB' 'Slab: 921680 kB' 'SReclaimable: 505172 kB' 'SUnreclaim: 416508 kB' 'KernelStack: 16272 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485768 kB' 'Committed_AS: 14139532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214248 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.307 13:46:15 -- setup/common.sh@31 
-- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeat the same IFS=': ' / read / compare / continue cycle for every non-matching /proc/meminfo key from Buffers through AnonHugePages]
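The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo line by line with an IFS=': ' read until the requested key (HugePages_Total here) matches, then echoing its value; for per-node queries mem_f switches to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that the mem=("${mem[@]#Node +([0-9]) }") extglob expansion strips first. A minimal standalone sketch of that lookup, assuming only the standard Linux procfs/sysfs layout (the function name and exact flow are illustrative, not the SPDK source):

#!/usr/bin/env bash
shopt -s extglob

# Look up one meminfo key ("MemTotal", "HugePages_Surp", ...) system-wide,
# or for a single NUMA node when a node number is given.
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo
    local mem line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <N> "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # prints 1025 on the box traced above
get_meminfo_sketch HugePages_Surp 0    # node 0's surplus count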
00:04:24.307 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.307 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.307 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.308 13:46:15 -- setup/common.sh@33 -- # echo 1025 00:04:24.308 13:46:15 -- setup/common.sh@33 -- # return 0 00:04:24.308 13:46:15 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.308 13:46:15 -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.308 13:46:15 -- setup/hugepages.sh@27 -- # local node 00:04:24.308 13:46:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.308 13:46:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.308 13:46:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.308 13:46:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:24.308 13:46:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.308 13:46:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.308 13:46:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.308 13:46:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.308 13:46:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.308 13:46:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.308 13:46:15 -- setup/common.sh@18 -- # local node=0 00:04:24.308 13:46:15 -- setup/common.sh@19 -- # local var val 00:04:24.308 13:46:15 -- setup/common.sh@20 
-- # local mem_f mem 00:04:24.308 13:46:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.308 13:46:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.308 13:46:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.308 13:46:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.308 13:46:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 40948884 kB' 'MemUsed: 7168084 kB' 'SwapCached: 0 kB' 'Active: 3922764 kB' 'Inactive: 285832 kB' 'Active(anon): 3505112 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903540 kB' 'Mapped: 121816 kB' 'AnonPages: 308300 kB' 'Shmem: 3200056 kB' 'KernelStack: 8280 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 580212 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 233824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 
13:46:15 -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the compare-and-continue cycle repeats for every non-matching node0 meminfo key from Active(file) through FilePmdMapped]
00:04:24.308
13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@33 -- # echo 0 00:04:24.308 13:46:15 -- setup/common.sh@33 -- # return 0 00:04:24.308 13:46:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.308 13:46:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.308 13:46:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.308 13:46:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:24.308 13:46:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.308 13:46:15 -- setup/common.sh@18 -- # local node=1 00:04:24.308 13:46:15 -- setup/common.sh@19 -- # local var val 00:04:24.308 13:46:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.308 13:46:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.308 13:46:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:24.308 13:46:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:24.308 13:46:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.308 13:46:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 29325400 kB' 'MemUsed: 14851160 kB' 'SwapCached: 0 kB' 'Active: 9371864 kB' 'Inactive: 3445816 kB' 'Active(anon): 9233308 kB' 'Inactive(anon): 0 kB' 'Active(file): 138556 kB' 'Inactive(file): 3445816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12600296 kB' 'Mapped: 88352 kB' 'AnonPages: 217496 kB' 'Shmem: 9015924 kB' 'KernelStack: 7960 kB' 'PageTables: 2980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158784 kB' 'Slab: 341468 kB' 'SReclaimable: 158784 kB' 'SUnreclaim: 182684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.308 13:46:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.308 13:46:15 -- setup/common.sh@32 -- # continue 
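get_nodes above globbed /sys/devices/system/node/node+([0-9]) and recorded 512 pages on node 0 and 513 on node 1, and the script is now re-querying each node's HugePages_Surp in turn. The same per-node tally can be read directly from the standard sysfs hugepage counters; a short sketch assuming the 2048 kB page size reported by Hugepagesize above:

# Tally 2 MiB hugepages per NUMA node and check they sum to the global count.
total=0
for node in /sys/devices/system/node/node[0-9]*; do
    pages=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    printf '%s: %s pages\n' "${node##*/}" "$pages"
    (( total += pages ))
done
printf 'sum over nodes: %s\n' "$total"   # 512 + 513 = 1025 for the odd_alloc split above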
[xtrace condensed: the compare-and-continue cycle repeats for every non-matching node1 meminfo key from MemUsed through HugePages_Free]
00:04:24.309 13:46:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.309 13:46:15 -- setup/common.sh@33 -- # echo 0
00:04:24.309 13:46:15 -- setup/common.sh@33 -- # return 0
00:04:24.309 13:46:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.309 13:46:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.309 13:46:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:24.309 node0=512 expecting 513
00:04:24.309 13:46:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.309 13:46:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.309 13:46:15 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:24.309 node1=513 expecting 512
00:04:24.309 13:46:15 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:24.309
00:04:24.309 real 0m6.304s
00:04:24.309 user 0m2.202s
00:04:24.309 sys 0m4.171s
00:04:24.309 13:46:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:24.309 13:46:15 -- common/autotest_common.sh@10 -- # set +x
00:04:24.309 ************************************
00:04:24.309 END TEST odd_alloc
00:04:24.309 ************************************
00:04:24.309 13:46:15 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:24.309 13:46:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:24.309 13:46:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:24.309 13:46:15 -- common/autotest_common.sh@10 -- # set +x
00:04:24.309 ************************************
00:04:24.309 START TEST custom_alloc
00:04:24.309 ************************************
00:04:24.309 13:46:15 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:24.309 13:46:15 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:24.309 13:46:15 -- setup/hugepages.sh@169 -- # local node
00:04:24.309 13:46:15 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:24.309 13:46:15 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:24.309 13:46:15 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:24.309 13:46:15 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:24.309 13:46:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:24.309 13:46:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:24.309 13:46:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.309 13:46:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:24.309 13:46:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:24.309 13:46:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.309 13:46:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:24.309 13:46:15 -- setup/hugepages.sh@83 -- # : 256
00:04:24.309 13:46:15 -- setup/hugepages.sh@84 -- # : 1
00:04:24.309 13:46:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:24.309 13:46:15 -- setup/hugepages.sh@83 -- # : 0
00:04:24.309 13:46:15 -- setup/hugepages.sh@84 -- # : 0
00:04:24.309 13:46:15 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:24.309 13:46:15 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:24.309 13:46:15 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:24.309 13:46:15 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
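The two get_test_nr_hugepages calls above turn a requested size into a page count: with the 2048 kB Hugepagesize reported earlier, 1048576 kB (1 GiB) yields nr_hugepages=512 and 2097152 kB (2 GiB) yields 1024, i.e. the count is evidently size / default_hugepages once size clears the minimum at hugepages.sh@55. A one-liner reproducing the observed numbers (unit assumption: both values in kB, which the arithmetic bears out):

default_hugepages=2048              # kB, the Hugepagesize line in /proc/meminfo
for size in 1048576 2097152; do     # requested sizes in kB
    echo "$size kB -> $(( size / default_hugepages )) hugepages"
done
# 1048576 kB -> 512 hugepages
# 2097152 kB -> 1024 hugepages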
00:04:24.309 13:46:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:24.309 13:46:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.309 13:46:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:24.309 13:46:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:24.309 13:46:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.309 13:46:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:24.309 13:46:15 -- setup/hugepages.sh@78 -- # return 0
00:04:24.309 13:46:15 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:24.309 13:46:15 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:24.309 13:46:15 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:24.309 13:46:15 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:24.309 13:46:15 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.309 13:46:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:24.309 13:46:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:24.309 13:46:15 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.309 13:46:15 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.309 13:46:15 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:24.309 13:46:15 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:24.309 13:46:15 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:24.309 13:46:15 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:24.309 13:46:15 -- setup/hugepages.sh@78 -- # return 0
00:04:24.309 13:46:15 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:24.309 13:46:15 -- setup/hugepages.sh@187 -- # setup output
00:04:24.309 13:46:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.309 13:46:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:28.501 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:28.501 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:28.501 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:30.434 13:46:21 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:30.434 13:46:21 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:30.434 13:46:21 -- setup/hugepages.sh@89 -- # local node
00:04:30.434 13:46:21 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:30.434 13:46:21 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:30.434 13:46:21 -- setup/hugepages.sh@92 -- # local surp
00:04:30.434 13:46:21 -- setup/hugepages.sh@93 -- # local resv
00:04:30.434 13:46:21 -- setup/hugepages.sh@94 -- # local anon
00:04:30.434 13:46:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:30.434 13:46:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:30.434 13:46:21 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:30.434 13:46:21 -- setup/common.sh@18 -- # local node=
00:04:30.434 13:46:21 -- setup/common.sh@19 -- # local var val
00:04:30.434 13:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.434 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.434 13:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.434 13:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.434 13:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.434 13:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.434 13:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:30.434 13:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:30.434 13:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69261104 kB' 'MemAvailable: 73217916 kB' 'Buffers: 9896 kB' 'Cached: 16494048 kB' 'SwapCached: 0 kB' 'Active: 13293084 kB' 'Inactive: 3731648 kB' 'Active(anon): 12736876 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524160 kB' 'Mapped: 209416 kB' 'Shmem: 12216088 kB' 'KReclaimable: 505140 kB' 'Slab: 921660 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 416520 kB' 'KernelStack: 16256 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14105592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214184 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
00:04:30.434 13:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.434 13:46:21 -- setup/common.sh@32 -- # continue
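The HUGENODE string handed to setup.sh above, 'nodes_hp[0]=512,nodes_hp[1]=1024', packs one page count per node into a comma-separated list, which is why custom_alloc declares local IFS=, before assembling it. A sketch of splitting such a spec back into per-node counts (the parsing is illustrative; setup.sh's actual consumer is not shown in this log):

HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
declare -A per_node
IFS=',' read -ra fields <<<"$HUGENODE"
for f in "${fields[@]}"; do
    node=${f#nodes_hp[}             # strip the array-name prefix...
    node=${node%%]*}                # ...leaving just the node index
    per_node[$node]=${f#*=}         # the count after '='
done
for n in "${!per_node[@]}"; do
    echo "node$n: ${per_node[$n]} hugepages"
done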
[xtrace condensed: the compare-and-continue cycle repeats for every non-matching key from MemFree through Percpu while get_meminfo scans for AnonHugePages]
00:04:30.435 13:46:21 --
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.435 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.435 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.435 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.435 13:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.435 13:46:21 -- setup/common.sh@33 -- # echo 0 00:04:30.435 13:46:21 -- setup/common.sh@33 -- # return 0 00:04:30.435 13:46:21 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.435 13:46:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.435 13:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.435 13:46:21 -- setup/common.sh@18 -- # local node= 00:04:30.435 13:46:21 -- setup/common.sh@19 -- # local var val 00:04:30.435 13:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.435 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.436 13:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.436 13:46:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.436 13:46:21 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.436 13:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69264960 kB' 'MemAvailable: 73221772 kB' 'Buffers: 9896 kB' 'Cached: 16494048 kB' 'SwapCached: 0 kB' 'Active: 13293184 kB' 'Inactive: 3731648 kB' 'Active(anon): 12736976 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524244 kB' 'Mapped: 209296 kB' 'Shmem: 12216088 kB' 'KReclaimable: 505140 kB' 'Slab: 921628 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 416488 kB' 'KernelStack: 16224 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14105604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214152 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 
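verify_nr_hugepages is mid-check at this point: the AnonHugePages probe came back 0 (no transparent hugepages in play), and the scan now underway pulls HugePages_Surp from the same 1536-page snapshot. The invariant being enforced is the one already visible at hugepages.sh@107 earlier: the global HugePages_Total must equal the requested count plus surplus plus reserved pages. A sketch of that accounting check (helper name illustrative):

# Assert HugePages_Total == requested + surplus + reserved, as the trace does.
meminfo_val() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

nr_hugepages=1536                   # what custom_alloc requested above
total=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total surp=$surp resv=$resv" >&2
fi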
00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.436 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.436 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # continue 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.437 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.437 13:46:21 -- setup/common.sh@33 -- # echo 0 00:04:30.437 13:46:21 -- setup/common.sh@33 -- # return 0 00:04:30.437 13:46:21 -- setup/hugepages.sh@99 -- # surp=0 00:04:30.437 13:46:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.437 13:46:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.437 13:46:21 -- setup/common.sh@18 -- # local node= 00:04:30.437 13:46:21 -- setup/common.sh@19 -- # local var val 00:04:30.437 13:46:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.437 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.437 
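Every block condensed above is the same routine: get_meminfo reads a meminfo-style file line by line and prints the value of the one requested field. A minimal standalone sketch of that technique, reconstructed from the xtrace (the names get_meminfo, get, node, mem_f and mem come from the trace; the actual setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    # Sketch of the field scan traced above; a reconstruction from the
    # xtrace, not the upstream source.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _ entry
        local mem_f=/proc/meminfo mem
        # Use the per-NUMA-node meminfo when a node index was passed.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # strip the "Node N " prefix
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] || continue   # one compare per field, as traced
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

Called with no node argument it scans /proc/meminfo, as in the lookups above; get_meminfo HugePages_Surp 0 reads node0's file instead, as in the per-node pass further down.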
00:04:30.437 13:46:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.437 13:46:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.437 13:46:21 -- setup/common.sh@18 -- # local node=
00:04:30.437 13:46:21 -- setup/common.sh@19 -- # local var val
00:04:30.437 13:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.437 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.437 13:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.437 13:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.437 13:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.437 13:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.437 13:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:30.437 13:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:30.437 13:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69265224 kB' 'MemAvailable: 73222036 kB' 'Buffers: 9896 kB' 'Cached: 16494048 kB' 'SwapCached: 0 kB' 'Active: 13293384 kB' 'Inactive: 3731648 kB' 'Active(anon): 12737176 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524444 kB' 'Mapped: 209296 kB' 'Shmem: 12216088 kB' 'KReclaimable: 505140 kB' 'Slab: 921628 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 416488 kB' 'KernelStack: 16224 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14105616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214152 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: MemTotal through HugePages_Free are each compared against HugePages_Rsvd and skipped via continue]
00:04:30.438 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.438 13:46:21 -- setup/common.sh@33 -- # echo 0
00:04:30.439 13:46:21 -- setup/common.sh@33 -- # return 0
00:04:30.439 13:46:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:30.439 13:46:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:30.439 nr_hugepages=1536
00:04:30.439 13:46:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.439 resv_hugepages=0
00:04:30.439 13:46:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.439 surplus_hugepages=0
00:04:30.439 13:46:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.439 anon_hugepages=0
00:04:30.439 13:46:21 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:30.439 13:46:21 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
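With anon, surp and resv in hand, the verification is plain arithmetic against the requested page count. Restated from the traced hugepages.sh@107-110 checks with this run's values (a sketch, reusing the get_meminfo reconstruction above):

    # The kernel's total must account for the requested pages plus
    # surplus plus reserved; 1536 is the count configured for this run.
    nr_hugepages=1536
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1536 in this run
    (( total == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch" >&2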
00:04:30.439 13:46:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.439 13:46:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.439 13:46:21 -- setup/common.sh@18 -- # local node=
00:04:30.439 13:46:21 -- setup/common.sh@19 -- # local var val
00:04:30.439 13:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.439 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.439 13:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.439 13:46:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.439 13:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.439 13:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.439 13:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:30.439 13:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:30.439 13:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 69263216 kB' 'MemAvailable: 73220028 kB' 'Buffers: 9896 kB' 'Cached: 16494060 kB' 'SwapCached: 0 kB' 'Active: 13293760 kB' 'Inactive: 3731648 kB' 'Active(anon): 12737552 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524784 kB' 'Mapped: 209296 kB' 'Shmem: 12216100 kB' 'KReclaimable: 505140 kB' 'Slab: 921628 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 416488 kB' 'KernelStack: 16240 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962504 kB' 'Committed_AS: 14105632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214152 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: MemTotal through Unaccepted are each compared against HugePages_Total and skipped via continue]
00:04:30.701 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.701 13:46:21 -- setup/common.sh@33 -- # echo 1536
00:04:30.701 13:46:21 -- setup/common.sh@33 -- # return 0
00:04:30.701 13:46:21 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:30.701 13:46:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.701 13:46:21 -- setup/hugepages.sh@27 -- # local node
00:04:30.701 13:46:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.701 13:46:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:30.701 13:46:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.701 13:46:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.701 13:46:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.701 13:46:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
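get_nodes walks sysfs once to learn the NUMA layout; the +([0-9]) in the traced for loop is an extglob pattern, so node+([0-9]) expands to node0, node1, and so on. A sketch of that walk follows. The trace shows only the resulting counts (512 and 1024), not where they are read from, so the per-node source below, the standard sysfs nr_hugepages file, is an assumption:

    shopt -s extglob                # required for the +([0-9]) glob
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the per-node count (512/1024 in this run):
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}       # 2 on this machine, per the trace
    (( no_nodes > 0 ))              # the suite needs at least one node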
00:04:30.701 13:46:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.701 13:46:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.701 13:46:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.701 13:46:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.701 13:46:21 -- setup/common.sh@18 -- # local node=0
00:04:30.701 13:46:21 -- setup/common.sh@19 -- # local var val
00:04:30.701 13:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.701 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.701 13:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.701 13:46:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.701 13:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.701 13:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.701 13:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:30.701 13:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 40954960 kB' 'MemUsed: 7162008 kB' 'SwapCached: 0 kB' 'Active: 3922100 kB' 'Inactive: 285832 kB' 'Active(anon): 3504448 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903616 kB' 'Mapped: 120956 kB' 'AnonPages: 307548 kB' 'Shmem: 3200132 kB' 'KernelStack: 8248 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 580116 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 233728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:30.701 13:46:21 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: node0's MemTotal through HugePages_Free are each compared against HugePages_Surp and skipped via continue]
00:04:30.702 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.702 13:46:21 -- setup/common.sh@33 -- # echo 0
00:04:30.702 13:46:21 -- setup/common.sh@33 -- # return 0
00:04:30.702 13:46:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
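This per-node loop, one iteration of which just completed for node 0 and which continues with node 1 below, folds the global reserve and each node's own surplus into its expected figure. A sketch of the traced loop (nodes_test is assumed pre-seeded with the expected split, 512 and 1024 here; resv is 0 in this run):

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                  # global reserve
        surp_node=$(get_meminfo HugePages_Surp "$node") # per-node surplus
        (( nodes_test[node] += surp_node ))
    done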
00:04:30.702 13:46:21 -- setup/common.sh@19 -- # local var val
00:04:30.702 13:46:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.702 13:46:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.702 13:46:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:30.702 13:46:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:30.702 13:46:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.702 13:46:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.702 13:46:21 -- setup/common.sh@31 -- # IFS=': '
00:04:30.702 13:46:21 -- setup/common.sh@31 -- # read -r var val _
00:04:30.702 13:46:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44176560 kB' 'MemFree: 28308472 kB' 'MemUsed: 15868088 kB' 'SwapCached: 0 kB' 'Active: 9372832 kB' 'Inactive: 3445816 kB' 'Active(anon): 9234276 kB' 'Inactive(anon): 0 kB' 'Active(file): 138556 kB' 'Inactive(file): 3445816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12600344 kB' 'Mapped: 88340 kB' 'AnonPages: 218452 kB' 'Shmem: 9015972 kB' 'KernelStack: 8008 kB' 'PageTables: 3144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158752 kB' 'Slab: 341512 kB' 'SReclaimable: 158752 kB' 'SUnreclaim: 182760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 compared every node1 meminfo key from MemTotal through HugePages_Free against HugePages_Surp and issued continue for each non-match]
00:04:30.703 13:46:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.703 13:46:21 -- setup/common.sh@33 -- # echo 0
00:04:30.703 13:46:21 -- setup/common.sh@33 -- # return 0
00:04:30.703 13:46:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.703 13:46:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.703 13:46:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.703 13:46:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.703 13:46:21 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:30.703 node0=512 expecting 512
00:04:30.703 13:46:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.703 13:46:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.703 13:46:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.703 13:46:21 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:30.703 node1=1024 expecting 1024
00:04:30.703 13:46:21 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:30.703 
00:04:30.703 real 0m6.254s
00:04:30.703 user 0m2.244s
00:04:30.703 sys 0m4.094s
00:04:30.703 13:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:30.703 13:46:21 -- common/autotest_common.sh@10 -- # set +x
00:04:30.703 ************************************
00:04:30.703 END TEST custom_alloc
00:04:30.703 ************************************
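The custom_alloc verdict above comes down to one literal string comparison: the per-node counts collected via get_meminfo are joined and matched against the expected layout, and bash xtrace prints the right-hand side with every character backslash-escaped, which is why it renders as \5\1\2\,\1\0\2\4. A hedged sketch of that final check (illustrative, not the exact hugepages.sh code):

    # Join observed per-node hugepage counts and compare against the
    # expected "512,1024" layout (names here are illustrative).
    nodes_test=(512 1024)                        # counts gathered per NUMA node
    expected='512,1024'
    observed=$(IFS=,; echo "${nodes_test[*]}")   # -> "512,1024"
    if [[ $observed == "$expected" ]]; then
        echo "custom_alloc: layout matches ($observed)"
    else
        echo "custom_alloc: expected $expected, got $observed" >&2
    fi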
00:04:30.703 13:46:21 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:30.703 13:46:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:30.703 13:46:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:30.703 13:46:21 -- common/autotest_common.sh@10 -- # set +x
00:04:30.703 ************************************
00:04:30.703 START TEST no_shrink_alloc
00:04:30.703 ************************************
00:04:30.703 13:46:21 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:30.703 13:46:21 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:30.703 13:46:21 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:30.703 13:46:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:30.703 13:46:21 -- setup/hugepages.sh@51 -- # shift
00:04:30.703 13:46:21 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:30.703 13:46:21 -- setup/hugepages.sh@52 -- # local node_ids
00:04:30.703 13:46:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.703 13:46:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:30.703 13:46:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:30.703 13:46:21 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:30.703 13:46:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.703 13:46:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:30.703 13:46:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:30.703 13:46:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:30.703 13:46:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:30.703 13:46:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:30.703 13:46:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.703 13:46:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:30.703 13:46:21 -- setup/hugepages.sh@73 -- # return 0
00:04:30.703 13:46:21 -- setup/hugepages.sh@198 -- # setup output
00:04:30.703 13:46:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.703 13:46:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:04:34.897 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:34.897 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:34.897 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:34.898 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:34.898 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:34.898 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:34.898 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
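The sizing traced at the start of no_shrink_alloc above is plain arithmetic: a 2097152 kB (2 GiB) request divided by the default 2048 kB (2 MiB) hugepage gives the nr_hugepages=1024 that the trace assigns, and with the user node list ('0') all 1024 pages are requested on node 0. A sketch under those assumptions (variable names are illustrative):

    # Reproduce the nr_hugepages computation from the trace above.
    size_kb=2097152                           # requested pool size in kB (2 GiB)
    hugepage_kb=2048                          # Hugepagesize: 2048 kB from /proc/meminfo
    nr_hugepages=$((size_kb / hugepage_kb))   # 2097152 / 2048 = 1024
    declare -a nodes_test
    for node in 0; do                         # user_nodes=('0') in the trace
        nodes_test[node]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages on node(s) ${!nodes_test[*]}"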
00:04:36.806 13:46:27 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:36.806 13:46:27 -- setup/hugepages.sh@89 -- # local node
00:04:36.806 13:46:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:36.806 13:46:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:36.806 13:46:27 -- setup/hugepages.sh@92 -- # local surp
00:04:36.806 13:46:27 -- setup/hugepages.sh@93 -- # local resv
00:04:36.806 13:46:27 -- setup/hugepages.sh@94 -- # local anon
00:04:36.806 13:46:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:36.806 13:46:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:36.806 13:46:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:36.806 13:46:27 -- setup/common.sh@18 -- # local node=
00:04:36.806 13:46:27 -- setup/common.sh@19 -- # local var val
00:04:36.806 13:46:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.806 13:46:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.806 13:46:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.806 13:46:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.806 13:46:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.806 13:46:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.806 13:46:27 -- setup/common.sh@31 -- # IFS=': '
00:04:36.806 13:46:27 -- setup/common.sh@31 -- # read -r var val _
00:04:36.806 13:46:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70296164 kB' 'MemAvailable: 74252976 kB' 'Buffers: 9896 kB' 'Cached: 16494216 kB' 'SwapCached: 0 kB' 'Active: 13294276 kB' 'Inactive: 3731648 kB' 'Active(anon): 12738068 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525108 kB' 'Mapped: 209376 kB' 'Shmem: 12216256 kB' 'KReclaimable: 505140 kB' 'Slab: 920152 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 415012 kB' 'KernelStack: 16240 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14106700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214296 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: setup/common.sh@31-32 compared every meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and issued continue for each non-match]
00:04:36.807 13:46:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:36.807 13:46:27 -- setup/common.sh@33 -- # echo 0
00:04:36.807 13:46:27 -- setup/common.sh@33 -- # return 0
00:04:36.807 13:46:27 -- setup/hugepages.sh@97 -- # anon=0
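verify_nr_hugepages first checks that transparent hugepages are not forced off (the trace's [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] tests whether the THP mode string lacks "[never]") and then reads back the anonymous, surplus, and reserved counters one get_meminfo call at a time. A condensed sketch of that bookkeeping, reusing the get_meminfo_sketch helper above (illustrative, not the verbatim script):

    # Gather the counters verify_nr_hugepages compares (sketch).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    [[ $thp != *'[never]'* ]] && anon=$(get_meminfo_sketch AnonHugePages)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    echo "anon=$anon surp=$surp resv=$resv"              # all 0 in this run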
00:04:36.807 13:46:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.807 13:46:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.807 13:46:27 -- setup/common.sh@18 -- # local node=
00:04:36.807 13:46:27 -- setup/common.sh@19 -- # local var val
00:04:36.807 13:46:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.807 13:46:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.807 13:46:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.807 13:46:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.807 13:46:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.807 13:46:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.807 13:46:27 -- setup/common.sh@31 -- # IFS=': '
00:04:36.807 13:46:27 -- setup/common.sh@31 -- # read -r var val _
00:04:36.807 13:46:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70295652 kB' 'MemAvailable: 74252464 kB' 'Buffers: 9896 kB' 'Cached: 16494216 kB' 'SwapCached: 0 kB' 'Active: 13294096 kB' 'Inactive: 3731648 kB' 'Active(anon): 12737888 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524928 kB' 'Mapped: 209364 kB' 'Shmem: 12216256 kB' 'KReclaimable: 505140 kB' 'Slab: 920204 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 415064 kB' 'KernelStack: 16240 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14106712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[xtrace condensed: setup/common.sh@31-32 compared every meminfo key from MemTotal through HugePages_Free against HugePages_Surp and issued continue for each non-match]
00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.809 13:46:27 -- setup/common.sh@33 -- # echo 0
00:04:36.809 13:46:27 -- setup/common.sh@33 -- # return 0
00:04:36.809 13:46:27 -- setup/hugepages.sh@99 -- # surp=0
00:04:36.809 13:46:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.809 13:46:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.809 13:46:27 -- setup/common.sh@18 -- # local node=
00:04:36.809 13:46:27 -- setup/common.sh@19 -- # local var val
00:04:36.809 13:46:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.809 13:46:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.809 13:46:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.809 13:46:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.809 13:46:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.809 13:46:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': '
00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _
00:04:36.809 13:46:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70296156 kB' 'MemAvailable: 74252968 kB' 'Buffers: 9896 kB' 'Cached: 16494228 kB' 'SwapCached: 0 kB' 'Active: 13294096 kB' 'Inactive: 3731648 kB' 'Active(anon): 12737888 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524920 kB' 'Mapped: 209364 kB' 'Shmem: 12216268 kB' 'KReclaimable: 505140 kB' 'Slab: 920204 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 415064 kB' 'KernelStack: 16240 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14106728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
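Each of the meminfo dumps above carries the whole of /proc/meminfo just so one HugePages_* field can be pulled out. Outside the harness, the same counters can be read directly with a standard one-liner (nothing SPDK-specific):

    # Show only the hugepage counters the verifier inspects.
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # Matching this run's dump:
    #   HugePages_Total:    1024
    #   HugePages_Free:     1024
    #   HugePages_Rsvd:        0
    #   HugePages_Surp:        0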
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.809 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.809 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.810 13:46:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.810 13:46:27 -- 
setup/common.sh@32 -- # continue 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.810 13:46:27 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the scan walks the remaining /proc/meminfo fields (SUnreclaim through HugePages_Free), hitting 'continue' on each until HugePages_Rsvd matches]
00:04:36.811 13:46:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.811 13:46:27 -- setup/common.sh@33 -- # echo 0 00:04:36.811
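The trace just above is setup/common.sh's get_meminfo walking /proc/meminfo field by field until the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace alone (names follow the trace; this is not the verbatim SPDK script):

    #!/usr/bin/env bash
    shopt -s extglob                          # for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}              # e.g. get_meminfo HugePages_Rsvd [node]
        local var val _ line mem_f=/proc/meminfo
        local -a mem
        # per-node stats live under /sys; fall back to /proc/meminfo otherwise
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node 0 " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # first matching field wins
                echo "$val"                   # numeric value only, e.g. 0 or 1024
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo HugePages_Total (machine-wide) or get_meminfo HugePages_Surp 0 (node 0 only), matching the invocations visible in this log.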
13:46:27 -- setup/common.sh@33 -- # return 0 00:04:36.811 13:46:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:36.811 13:46:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.811 nr_hugepages=1024 00:04:36.811 13:46:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.811 resv_hugepages=0 00:04:36.811 13:46:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.811 surplus_hugepages=0 00:04:36.811 13:46:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.811 anon_hugepages=0 00:04:36.811 13:46:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.811 13:46:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.811 13:46:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.811 13:46:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.811 13:46:27 -- setup/common.sh@18 -- # local node= 00:04:36.811 13:46:27 -- setup/common.sh@19 -- # local var val 00:04:36.811 13:46:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.811 13:46:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.811 13:46:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.811 13:46:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.811 13:46:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.811 13:46:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.811 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.811 13:46:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70295156 kB' 'MemAvailable: 74251968 kB' 'Buffers: 9896 kB' 'Cached: 16494244 kB' 'SwapCached: 0 kB' 'Active: 13294084 kB' 'Inactive: 3731648 kB' 'Active(anon): 12737876 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524924 kB' 'Mapped: 209364 kB' 'Shmem: 12216284 kB' 'KReclaimable: 505140 kB' 'Slab: 920204 kB' 'SReclaimable: 505140 kB' 'SUnreclaim: 415064 kB' 'KernelStack: 16240 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14106988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214264 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:36.811 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.811 13:46:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.811 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.811 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.811 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.811 13:46:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.811 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.811 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.811 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.811 13:46:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
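A quick sanity check on the meminfo dump above: with 'Hugepagesize: 2048 kB', the 1024 pages reported in HugePages_Total account exactly for the Hugetlb figure:

    echo "$(( 1024 * 2048 )) kB"    # 2097152 kB = 2 GiB, matching 'Hugetlb: 2097152 kB'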
00:04:36.811 13:46:27 -- setup/common.sh@32 -- # continue
[xtrace elided: the same per-field scan, now against HugePages_Total; every field from Buffers through Unaccepted is skipped with 'continue']
00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.813 13:46:27 -- setup/common.sh@33 -- # echo 1024 00:04:36.813 13:46:27 -- setup/common.sh@33 -- # return 0 00:04:36.813 13:46:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.813 13:46:27 -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.813 13:46:27 -- setup/hugepages.sh@27 -- # local node 00:04:36.813 13:46:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.813 13:46:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.813 13:46:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.813 13:46:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:36.813 13:46:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.813 13:46:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.813 13:46:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.813 13:46:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.813 13:46:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.813 13:46:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.813 13:46:27
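get_nodes above enumerates the NUMA nodes and records each node's hugepage count: nodes_sys ends up as 1024 on node 0 and 0 on node 1, with no_nodes=2. A hedged sketch of that enumeration; the sysfs read is an assumption, since the xtrace only shows the resulting assignments:

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # hypothetical source of the per-node count (2 MB pages per the dump above)
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}               # 2 on this host
    (( no_nodes > 0 )) || exit 1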
-- setup/common.sh@18 -- # local node=0 00:04:36.813 13:46:27 -- setup/common.sh@19 -- # local var val 00:04:36.813 13:46:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.813 13:46:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.813 13:46:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.813 13:46:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.813 13:46:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.813 13:46:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 39909584 kB' 'MemUsed: 8207384 kB' 'SwapCached: 0 kB' 'Active: 3921604 kB' 'Inactive: 285832 kB' 'Active(anon): 3503952 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903756 kB' 'Mapped: 121020 kB' 'AnonPages: 306820 kB' 'Shmem: 3200272 kB' 'KernelStack: 8216 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 579852 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 233464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.813 13:46:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.813 13:46:27 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.813 13:46:27 -- setup/common.sh@32 -- # continue
[xtrace elided: the node0 meminfo scan skips every field from Active(file) through HugePages_Free with 'continue' until HugePages_Surp matches]
00:04:36.814 13:46:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.814 13:46:27 -- setup/common.sh@33 -- # echo 0 00:04:36.814 13:46:27 -- setup/common.sh@33 -- # return 0 00:04:36.814 13:46:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.814 13:46:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.814 13:46:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.814 13:46:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.814 13:46:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' node0=1024 expecting 1024 00:04:36.814 13:46:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:36.814 13:46:27 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:36.814 13:46:27 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:36.814 13:46:27 -- setup/hugepages.sh@202 -- # setup output 00:04:36.814 13:46:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.814 13:46:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:41.010 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.010 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.010 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:42.917 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:42.917 13:46:33 -- setup/hugepages.sh@204 -- #
verify_nr_hugepages 00:04:42.917 13:46:33 -- setup/hugepages.sh@89 -- # local node 00:04:42.917 13:46:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.917 13:46:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.917 13:46:33 -- setup/hugepages.sh@92 -- # local surp 00:04:42.917 13:46:33 -- setup/hugepages.sh@93 -- # local resv 00:04:42.917 13:46:33 -- setup/hugepages.sh@94 -- # local anon 00:04:42.917 13:46:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.917 13:46:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.917 13:46:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.917 13:46:33 -- setup/common.sh@18 -- # local node= 00:04:42.917 13:46:33 -- setup/common.sh@19 -- # local var val 00:04:42.917 13:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.917 13:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.917 13:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.917 13:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.917 13:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.917 13:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.917 13:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70286112 kB' 'MemAvailable: 74242964 kB' 'Buffers: 9896 kB' 'Cached: 16494356 kB' 'SwapCached: 0 kB' 'Active: 13297412 kB' 'Inactive: 3731648 kB' 'Active(anon): 12741204 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528164 kB' 'Mapped: 209552 kB' 'Shmem: 12216396 kB' 'KReclaimable: 505180 kB' 'Slab: 920180 kB' 'SReclaimable: 505180 kB' 'SUnreclaim: 415000 kB' 'KernelStack: 16256 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14111764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214392 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.917 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.917 13:46:33 -- 
setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.917 13:46:33 -- setup/common.sh@32 -- # continue
[xtrace elided: the scan skips every field from Cached through HardwareCorrupted with 'continue' until AnonHugePages matches]
00:04:42.918 13:46:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.918 13:46:33 -- setup/common.sh@33 -- # echo 0 00:04:42.918 13:46:33 -- setup/common.sh@33 -- # return 0 00:04:42.918 13:46:33 -- setup/hugepages.sh@97 -- # anon=0 00:04:42.918 13:46:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.918
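verify_nr_hugepages starts by probing transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above checks that THP is not set to [never] before counting AnonHugePages, which comes back 0 here. A sketch of that step, reusing the get_meminfo sketch from earlier:

    # "always [madvise] never" on this host, so the != *"[never]"* test passes
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0
    fi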
13:46:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.919 13:46:33 -- setup/common.sh@18 -- # local node= 00:04:42.919 13:46:33 -- setup/common.sh@19 -- # local var val 00:04:42.919 13:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.919 13:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.919 13:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.919 13:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.919 13:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.919 13:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.919 13:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70285952 kB' 'MemAvailable: 74242804 kB' 'Buffers: 9896 kB' 'Cached: 16494360 kB' 'SwapCached: 0 kB' 'Active: 13297616 kB' 'Inactive: 3731648 kB' 'Active(anon): 12741408 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528280 kB' 'Mapped: 209500 kB' 'Shmem: 12216400 kB' 'KReclaimable: 505180 kB' 'Slab: 920188 kB' 'SReclaimable: 505180 kB' 'SUnreclaim: 415008 kB' 'KernelStack: 16544 kB' 'PageTables: 9052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14111780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214456 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB' 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.919 13:46:33 -- setup/common.sh@32 -- # continue 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.919 13:46:33 -- setup/common.sh@31 -- # read -r var val _
[... identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / @32 continue / @31 IFS=': ' / @31 read -r var val _" xtrace records repeat at 00:04:42.919-00:04:42.920 for every remaining /proc/meminfo field (Active through HugePages_Rsvd) until the requested key is reached ...]
00:04:42.920 13:46:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.920 13:46:33 -- setup/common.sh@33 -- # echo 0 00:04:42.920 13:46:33 -- setup/common.sh@33 -- # return 0 00:04:42.920 13:46:33 -- setup/hugepages.sh@99 -- # surp=0
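(The scan condensed above is setup/common.sh's get_meminfo: it mapfiles the meminfo source, strips any "Node <n> " prefix, then walks the fields with IFS=': ' read -r var val _ until the requested key matches and echoes its value. A minimal standalone sketch of the same lookup follows; the function name get_meminfo_value is illustrative, not the SPDK helper itself:)

    #!/usr/bin/env bash
    # Sketch of the lookup traced above: print one field from
    # /proc/meminfo, or from a node's meminfo when a node id is given.
    # Per-node files prefix each line with "Node <n> ", which
    # get_meminfo strips with ${mem[@]#Node +([0-9]) }; sed does the
    # same job here.
    get_meminfo_value() {                  # hypothetical helper name
        local key=$1 node=${2:-}
        local src=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            src=/sys/devices/system/node/node$node/meminfo
        sed 's/^Node [0-9]* //' "$src" |
            awk -v k="$key" -F'[: ]+' '$1 == k { print $2; exit }'
    }

    get_meminfo_value HugePages_Surp     # system-wide -> 0 in this run
    get_meminfo_value HugePages_Surp 0   # node 0      -> 0 in this run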
00:04:42.920 13:46:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.920 13:46:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.920 13:46:33 -- setup/common.sh@18 -- # local node= 00:04:42.920 13:46:33 -- setup/common.sh@19 -- # local var val 00:04:42.920 13:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.920 13:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.920 13:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.920 13:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.920 13:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.920 13:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.920 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.920 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.921 13:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70286880 kB' 'MemAvailable: 74243732 kB' 'Buffers: 9896 kB' 'Cached: 16494372 kB' 'SwapCached: 0 kB' 'Active: 13298144 kB' 'Inactive: 3731648 kB' 'Active(anon): 12741936 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528908 kB' 'Mapped: 209500 kB' 'Shmem: 12216412 kB' 'KReclaimable: 505180 kB' 'Slab: 920188 kB' 'SReclaimable: 505180 kB' 'SUnreclaim: 415008 kB' 'KernelStack: 16560 kB' 'PageTables: 9396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14111796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214440 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[... the same per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _" xtrace records repeat at 00:04:42.921-00:04:42.922 for MemTotal through HugePages_Free until the requested key is reached ...]
00:04:42.922 13:46:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.922 13:46:33 -- setup/common.sh@33 -- # echo 0 00:04:42.922 13:46:33 -- setup/common.sh@33 -- # return 0 00:04:42.922 13:46:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:42.922 13:46:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.922 nr_hugepages=1024 00:04:42.922 13:46:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.922 resv_hugepages=0 00:04:42.922 13:46:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.922 surplus_hugepages=0 00:04:42.922 13:46:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.922 anon_hugepages=0 00:04:42.922 13:46:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.922 13:46:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
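(The echoes and the two arithmetic checks above are the test's bookkeeping: with 1024 pages configured and neither surplus nor reserved pages in play, HugePages_Total must come back as exactly nr_hugepages + surp + resv. A sketch re-running the same assertion outside the harness, using only standard procfs paths; this mirrors the check at hugepages.sh@107, not kernel-internal accounting in general:)

    #!/usr/bin/env bash
    # Re-check the invariant asserted at hugepages.sh@107/@109 above.
    meminfo() { awk -v k="$1" -F'[: ]+' '$1 == k { print $2; exit }' /proc/meminfo; }

    nr=$(cat /proc/sys/vm/nr_hugepages)   # configured pool (1024 here)
    surp=$(meminfo HugePages_Surp)        # 0 in this run
    resv=$(meminfo HugePages_Rsvd)        # 0 in this run
    total=$(meminfo HugePages_Total)      # 1024 in this run

    if (( total == nr + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$total nr=$nr surp=$surp resv=$resv"
    fi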
00:04:42.922 13:46:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.922 13:46:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.922 13:46:33 -- setup/common.sh@18 -- # local node= 00:04:42.922 13:46:33 -- setup/common.sh@19 -- # local var val 00:04:42.922 13:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:42.922 13:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.922 13:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.922 13:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.922 13:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.922 13:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.922 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.922 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.922 13:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293528 kB' 'MemFree: 70285188 kB' 'MemAvailable: 74242040 kB' 'Buffers: 9896 kB' 'Cached: 16494396 kB' 'SwapCached: 0 kB' 'Active: 13297168 kB' 'Inactive: 3731648 kB' 'Active(anon): 12740960 kB' 'Inactive(anon): 0 kB' 'Active(file): 556208 kB' 'Inactive(file): 3731648 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527892 kB' 'Mapped: 209560 kB' 'Shmem: 12216436 kB' 'KReclaimable: 505180 kB' 'Slab: 920188 kB' 'SReclaimable: 505180 kB' 'SUnreclaim: 415008 kB' 'KernelStack: 16416 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486792 kB' 'Committed_AS: 14111808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214328 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1330624 kB' 'DirectMap2M: 30851072 kB' 'DirectMap1G: 69206016 kB'
[... the same per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _" xtrace records repeat at 00:04:42.922-00:04:43.185 for MemTotal through Unaccepted until the requested key is reached ...]
00:04:43.185 13:46:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.185 13:46:33 -- setup/common.sh@33 -- # echo 1024 00:04:43.185 13:46:33 -- setup/common.sh@33 -- # return 0 00:04:43.185 13:46:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.185 13:46:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.185 13:46:33 -- setup/hugepages.sh@27 -- # local node 00:04:43.185 13:46:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.185 13:46:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.185 13:46:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.185 13:46:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:43.185 13:46:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.185 13:46:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.185 13:46:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.185 13:46:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.185 13:46:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.185 13:46:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.185 13:46:33 -- setup/common.sh@18 -- # local node=0 00:04:43.185 13:46:33 -- setup/common.sh@19 -- # local var val 00:04:43.185 13:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:43.185 13:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.185 13:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.185 13:46:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.185 13:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.185 13:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.185 13:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.185 13:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.185 13:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48116968 kB' 'MemFree: 39905004 kB' 'MemUsed: 8211964 kB' 'SwapCached: 0 kB' 'Active: 3922912 kB' 'Inactive: 285832 kB' 'Active(anon): 3505260 kB' 'Inactive(anon): 0 kB' 'Active(file): 417652 kB' 'Inactive(file): 285832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3903848 kB' 'Mapped: 121088 kB' 'AnonPages: 308080 kB' 'Shmem: 3200364 kB' 'KernelStack: 8216 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 346388 kB' 'Slab: 579792 kB' 'SReclaimable: 346388 kB' 'SUnreclaim: 233404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the same per-field "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" xtrace records repeat at 00:04:43.185-00:04:43.186 for the node0 meminfo fields (MemTotal through HugePages_Free) until the requested key is reached ...]
00:04:43.186 13:46:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.186 13:46:33 -- setup/common.sh@33 -- # echo 0 00:04:43.186 13:46:33 -- setup/common.sh@33 -- # return 0 00:04:43.186 13:46:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.186 13:46:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.186 13:46:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.186 13:46:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.186 13:46:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.186 node0=1024 expecting 1024 00:04:43.186 13:46:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.186 00:04:43.186 real 0m12.397s 00:04:43.186 user 0m4.330s 00:04:43.186 sys 0m8.223s 00:04:43.186 13:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.186 13:46:33 -- common/autotest_common.sh@10 -- # set +x 00:04:43.186 ************************************ 00:04:43.186 END TEST no_shrink_alloc 00:04:43.186 ************************************ 00:04:43.186 13:46:34 -- setup/hugepages.sh@217 -- # clear_hp 00:04:43.186 13:46:34 -- setup/hugepages.sh@37 -- # local node hp 00:04:43.186 13:46:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.186
13:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.186 13:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:43.186 13:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.186 13:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:43.186 13:46:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.186 13:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.186 13:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:43.186 13:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.186 13:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:43.186 13:46:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:43.186 13:46:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:43.187 00:04:43.187 real 0m47.330s 00:04:43.187 user 0m15.706s 00:04:43.187 sys 0m29.165s 00:04:43.187 13:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.187 13:46:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.187 ************************************ 00:04:43.187 END TEST hugepages 00:04:43.187 ************************************ 00:04:43.187 13:46:34 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:43.187 13:46:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.187 13:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.187 13:46:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.187 ************************************ 00:04:43.187 START TEST driver 00:04:43.187 ************************************ 00:04:43.187 13:46:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:04:43.187 * Looking for test storage... 
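(clear_hp, traced above, walks every NUMA node and zeroes each of its hugepage pools before the driver tests start; the trace shows two nodes with two page sizes each, hence the four echo 0 calls. A standalone sketch of the same cleanup, assuming the standard kernel sysfs layout and root privileges:)

    #!/usr/bin/env bash
    # Standalone equivalent of the clear_hp trace above: release every
    # per-node hugepage pool by zeroing its nr_hugepages knob.
    shopt -s nullglob   # absent hugepages dirs then expand to nothing
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # same flag the harness exports afterwards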
00:04:43.187 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:43.187 13:46:34 -- setup/driver.sh@68 -- # setup reset 00:04:43.187 13:46:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.187 13:46:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:51.318 13:46:41 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:51.318 13:46:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.318 13:46:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.318 13:46:41 -- common/autotest_common.sh@10 -- # set +x 00:04:51.318 ************************************ 00:04:51.318 START TEST guess_driver 00:04:51.318 ************************************ 00:04:51.318 13:46:41 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:51.318 13:46:41 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:51.318 13:46:41 -- setup/driver.sh@47 -- # local fail=0 00:04:51.318 13:46:41 -- setup/driver.sh@49 -- # pick_driver 00:04:51.318 13:46:41 -- setup/driver.sh@36 -- # vfio 00:04:51.318 13:46:41 -- setup/driver.sh@21 -- # local iommu_groups 00:04:51.318 13:46:41 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:51.318 13:46:41 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:51.318 13:46:41 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:51.318 13:46:41 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:51.318 13:46:41 -- setup/driver.sh@29 -- # (( 238 > 0 )) 00:04:51.318 13:46:41 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:51.318 13:46:41 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:51.318 13:46:41 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:51.318 13:46:41 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:51.318 13:46:41 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:51.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:51.318 13:46:41 -- setup/driver.sh@30 -- # return 0 00:04:51.318 13:46:41 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:51.318 13:46:41 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:51.318 13:46:41 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:51.318 13:46:41 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:51.318 Looking for driver=vfio-pci 00:04:51.318 13:46:41 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:51.318 13:46:41 -- setup/driver.sh@45 -- # setup output config 00:04:51.318 13:46:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.318 13:46:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
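(The pick_driver/vfio trace above reduces to two probes: the host exposes 238 IOMMU groups under /sys/kernel/iommu_groups, and modprobe --show-depends resolves vfio_pci to a real .ko chain, so vfio-pci is selected. A condensed sketch of that decision; the non-IOMMU fallback name is an assumption for illustration and is not exercised in this log:)

    #!/usr/bin/env bash
    # Condensed form of the vfio/pick_driver probes traced above.
    shopt -s nullglob
    groups=(/sys/kernel/iommu_groups/*)      # 238 entries in this run
    if (( ${#groups[@]} > 0 )) &&
       modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback; not taken here
    fi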
vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:45 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:55.512 13:46:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:55.512 13:46:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:55.512 13:46:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.797 13:46:49 -- 
setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.797 13:46:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.797 13:46:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.178 13:46:51 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:00.178 13:46:51 -- setup/driver.sh@65 -- # setup reset 00:05:00.178 13:46:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.178 13:46:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:08.368 00:05:08.368 real 0m16.863s 00:05:08.368 user 0m4.336s 00:05:08.368 sys 0m8.718s 00:05:08.368 13:46:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.368 13:46:58 -- common/autotest_common.sh@10 -- # set +x 00:05:08.368 ************************************ 00:05:08.368 END TEST guess_driver 00:05:08.368 ************************************ 00:05:08.368 00:05:08.368 real 0m24.549s 00:05:08.368 user 0m6.607s 00:05:08.368 sys 0m13.306s 00:05:08.368 13:46:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.368 13:46:58 -- common/autotest_common.sh@10 -- # set +x 00:05:08.368 ************************************ 00:05:08.368 END TEST driver 00:05:08.368 ************************************ 00:05:08.368 13:46:58 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:08.368 13:46:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.368 13:46:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.368 13:46:58 -- common/autotest_common.sh@10 -- # set +x 00:05:08.368 ************************************ 00:05:08.368 START TEST devices 00:05:08.368 ************************************ 00:05:08.368 13:46:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:05:08.368 * Looking for test storage... 
00:05:08.368 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:08.368 13:46:58 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:08.368 13:46:58 -- setup/devices.sh@192 -- # setup reset 00:05:08.368 13:46:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.368 13:46:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.945 13:47:05 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:14.945 13:47:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:14.945 13:47:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:14.945 13:47:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:14.945 13:47:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:14.945 13:47:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:14.945 13:47:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:14.945 13:47:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.945 13:47:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:14.945 13:47:05 -- setup/devices.sh@196 -- # blocks=() 00:05:14.945 13:47:05 -- setup/devices.sh@196 -- # declare -a blocks 00:05:14.945 13:47:05 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:14.945 13:47:05 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:14.945 13:47:05 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:14.945 13:47:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:14.945 13:47:05 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:14.945 13:47:05 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:14.945 13:47:05 -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:05:14.945 13:47:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:05:14.945 13:47:05 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:14.945 13:47:05 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:14.945 13:47:05 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:14.945 No valid GPT data, bailing 00:05:14.945 13:47:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:14.945 13:47:05 -- scripts/common.sh@393 -- # pt= 00:05:14.945 13:47:05 -- scripts/common.sh@394 -- # return 1 00:05:14.945 13:47:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:14.945 13:47:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:14.945 13:47:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:14.945 13:47:05 -- setup/common.sh@80 -- # echo 4000787030016 00:05:14.945 13:47:05 -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:05:14.945 13:47:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:14.945 13:47:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:05:14.945 13:47:05 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:14.945 13:47:05 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:14.945 13:47:05 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:14.945 13:47:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.945 13:47:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.945 13:47:05 -- common/autotest_common.sh@10 -- # set +x 00:05:14.945 ************************************ 00:05:14.945 START TEST nvme_mount 00:05:14.945 ************************************ 00:05:14.945 
13:47:05 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:14.945 13:47:05 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:14.945 13:47:05 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:14.945 13:47:05 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.945 13:47:05 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.945 13:47:05 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:14.945 13:47:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:14.945 13:47:05 -- setup/common.sh@40 -- # local part_no=1 00:05:14.945 13:47:05 -- setup/common.sh@41 -- # local size=1073741824 00:05:14.945 13:47:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:14.945 13:47:05 -- setup/common.sh@44 -- # parts=() 00:05:14.945 13:47:05 -- setup/common.sh@44 -- # local parts 00:05:14.945 13:47:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:14.945 13:47:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.945 13:47:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:14.945 13:47:05 -- setup/common.sh@46 -- # (( part++ )) 00:05:14.945 13:47:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:14.945 13:47:05 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:14.945 13:47:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:14.945 13:47:05 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:15.514 Creating new GPT entries in memory. 00:05:15.514 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:15.514 other utilities. 00:05:15.514 13:47:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:15.514 13:47:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:15.514 13:47:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:15.514 13:47:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:15.514 13:47:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:16.452 Creating new GPT entries in memory. 00:05:16.452 The operation has completed successfully. 
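For reference, the sgdisk call traced above is the product of the sizing arithmetic in setup/common.sh: the requested 1 GiB partition size is converted to 512-byte sectors and laid out starting at sector 2048. A minimal standalone bash sketch of that flow (a sketch of the traced logic, not the script itself; the disk name is the one used in this run):

    disk=/dev/nvme0n1          # target disk, as in this run
    size=1073741824            # 1 GiB requested per partition, in bytes
    ((size /= 512))            # convert to 512-byte sectors -> 2097152
    part_start=2048            # first usable sector after the GPT structures
    part_end=$((part_start + size - 1))    # 2048 + 2097152 - 1 = 2099199
    sgdisk "$disk" --zap-all               # wipe any existing GPT/MBR state
    flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end

The final command reproduces the --new=1:2048:2099199 bounds shown in the trace; flock serializes sgdisk against concurrent access to the block device, as in the traced call.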
00:05:16.452 13:47:07 -- setup/common.sh@57 -- # (( part++ )) 00:05:16.452 13:47:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.452 13:47:07 -- setup/common.sh@62 -- # wait 3869845 00:05:16.452 13:47:07 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.452 13:47:07 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:16.452 13:47:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.452 13:47:07 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:16.452 13:47:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:16.712 13:47:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.712 13:47:07 -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.712 13:47:07 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:16.712 13:47:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:16.712 13:47:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:16.712 13:47:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:16.712 13:47:07 -- setup/devices.sh@53 -- # local found=0 00:05:16.712 13:47:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:16.712 13:47:07 -- setup/devices.sh@56 -- # : 00:05:16.712 13:47:07 -- setup/devices.sh@59 -- # local pci status 00:05:16.712 13:47:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:16.712 13:47:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:16.712 13:47:07 -- setup/devices.sh@47 -- # setup output config 00:05:16.712 13:47:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.712 13:47:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:20.905 13:47:11 -- setup/devices.sh@63 -- # found=1 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 
]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.905 13:47:11 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:20.905 13:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.282 13:47:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.282 13:47:13 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:22.282 13:47:13 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.282 13:47:13 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.282 13:47:13 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.282 13:47:13 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:22.282 13:47:13 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.283 13:47:13 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.283 13:47:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.283 13:47:13 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:22.283 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.283 13:47:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.283 13:47:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.543 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:22.543 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:22.543 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe 
(PMBR): 55 aa 00:05:22.543 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:22.543 13:47:13 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:22.543 13:47:13 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:22.543 13:47:13 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.543 13:47:13 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:22.543 13:47:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:22.543 13:47:13 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.803 13:47:13 -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.803 13:47:13 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:22.803 13:47:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:22.803 13:47:13 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.803 13:47:13 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:22.803 13:47:13 -- setup/devices.sh@53 -- # local found=0 00:05:22.803 13:47:13 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:22.803 13:47:13 -- setup/devices.sh@56 -- # : 00:05:22.803 13:47:13 -- setup/devices.sh@59 -- # local pci status 00:05:22.803 13:47:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.803 13:47:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:22.803 13:47:13 -- setup/devices.sh@47 -- # setup output config 00:05:22.803 13:47:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.803 13:47:13 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:26.996 13:47:17 -- setup/devices.sh@63 -- # found=1 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.996 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.996 13:47:17 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:26.997 13:47:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.901 13:47:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.901 13:47:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:28.901 13:47:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.901 13:47:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.901 13:47:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:28.901 13:47:19 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.901 13:47:19 -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:05:28.901 13:47:19 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:28.901 13:47:19 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:28.901 13:47:19 -- setup/devices.sh@50 -- # local mount_point= 00:05:28.901 13:47:19 -- setup/devices.sh@51 -- # local test_file= 00:05:28.901 13:47:19 -- setup/devices.sh@53 -- # local found=0 00:05:28.901 13:47:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:28.901 13:47:19 -- setup/devices.sh@59 -- # local pci status 00:05:28.901 13:47:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.901 13:47:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:28.901 13:47:19 -- setup/devices.sh@47 -- # setup output config 00:05:28.901 13:47:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.901 13:47:19 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:33.139 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.139 13:47:23 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:33.139 13:47:23 -- setup/devices.sh@63 -- # found=1 00:05:33.139 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.139 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.139 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.139 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.139 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.139 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.139 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.140 13:47:23 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:33.140 13:47:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.045 13:47:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.045 13:47:25 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:35.045 13:47:25 -- setup/devices.sh@68 -- # return 0 00:05:35.045 13:47:25 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:35.045 13:47:25 -- setup/devices.sh@20 -- # mountpoint -q 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:35.045 13:47:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:35.045 13:47:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:35.045 13:47:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:35.045 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:35.045 00:05:35.045 real 0m20.404s 00:05:35.045 user 0m6.099s 00:05:35.045 sys 0m12.153s 00:05:35.045 13:47:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.045 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.045 ************************************ 00:05:35.045 END TEST nvme_mount 00:05:35.045 ************************************ 00:05:35.045 13:47:25 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:35.045 13:47:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.045 13:47:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.045 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.045 ************************************ 00:05:35.045 START TEST dm_mount 00:05:35.045 ************************************ 00:05:35.045 13:47:25 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:35.045 13:47:25 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:35.045 13:47:25 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:35.045 13:47:25 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:35.045 13:47:25 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:35.045 13:47:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:35.045 13:47:25 -- setup/common.sh@40 -- # local part_no=2 00:05:35.045 13:47:25 -- setup/common.sh@41 -- # local size=1073741824 00:05:35.045 13:47:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:35.045 13:47:25 -- setup/common.sh@44 -- # parts=() 00:05:35.045 13:47:25 -- setup/common.sh@44 -- # local parts 00:05:35.045 13:47:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:35.045 13:47:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.045 13:47:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.045 13:47:25 -- setup/common.sh@46 -- # (( part++ )) 00:05:35.045 13:47:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.045 13:47:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:35.045 13:47:25 -- setup/common.sh@46 -- # (( part++ )) 00:05:35.045 13:47:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:35.045 13:47:25 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:35.045 13:47:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:35.045 13:47:25 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:35.982 Creating new GPT entries in memory. 00:05:35.982 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:35.982 other utilities. 00:05:35.982 13:47:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:35.982 13:47:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:35.982 13:47:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:35.982 13:47:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:35.982 13:47:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:36.920 Creating new GPT entries in memory. 00:05:36.920 The operation has completed successfully. 
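dm_mount drives the same partition_drive helper with part_no=2, so the sizing loop above runs a second time, and each new partition starts one sector after the previous one ends. Continuing the arithmetic (a sketch; the values reproduce the traced calls):

    size=2097152                          # 1 GiB in 512-byte sectors, as before
    part_start=$((2099199 + 1))           # previous part_end + 1 -> 2099200
    part_end=$((part_start + size - 1))   # 2099200 + 2097152 - 1 = 4196351
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:$part_start:$part_end

which is exactly the --new=2:2099200:4196351 call that follows in the trace.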
00:05:36.920 13:47:27 -- setup/common.sh@57 -- # (( part++ )) 00:05:36.920 13:47:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:36.920 13:47:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:36.920 13:47:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:36.920 13:47:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:38.301 The operation has completed successfully. 00:05:38.301 13:47:28 -- setup/common.sh@57 -- # (( part++ )) 00:05:38.301 13:47:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.301 13:47:28 -- setup/common.sh@62 -- # wait 3875313 00:05:38.301 13:47:28 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:38.301 13:47:28 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.301 13:47:28 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.301 13:47:28 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:38.301 13:47:28 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:38.301 13:47:28 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.301 13:47:28 -- setup/devices.sh@161 -- # break 00:05:38.301 13:47:28 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.301 13:47:28 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:38.301 13:47:28 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:38.301 13:47:28 -- setup/devices.sh@166 -- # dm=dm-0 00:05:38.301 13:47:28 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:38.301 13:47:28 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:38.301 13:47:28 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.301 13:47:28 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:05:38.301 13:47:28 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.301 13:47:28 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:38.301 13:47:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:38.301 13:47:29 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.301 13:47:29 -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.301 13:47:29 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:38.301 13:47:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:38.301 13:47:29 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:38.301 13:47:29 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:38.301 13:47:29 -- setup/devices.sh@53 -- # local found=0 00:05:38.301 13:47:29 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:38.301 13:47:29 -- setup/devices.sh@56 -- # : 
00:05:38.301 13:47:29 -- setup/devices.sh@59 -- # local pci status 00:05:38.301 13:47:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.301 13:47:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:38.301 13:47:29 -- setup/devices.sh@47 -- # setup output config 00:05:38.301 13:47:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.301 13:47:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:42.494 13:47:32 -- setup/devices.sh@63 -- # found=1 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.494 13:47:32 -- setup/devices.sh@62 -- # 
[[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:42.494 13:47:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.400 13:47:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.400 13:47:35 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:44.400 13:47:35 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:44.400 13:47:35 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:44.400 13:47:35 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:44.400 13:47:35 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:44.400 13:47:35 -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:44.400 13:47:35 -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:05:44.400 13:47:35 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:44.400 13:47:35 -- setup/devices.sh@50 -- # local mount_point= 00:05:44.400 13:47:35 -- setup/devices.sh@51 -- # local test_file= 00:05:44.400 13:47:35 -- setup/devices.sh@53 -- # local found=0 00:05:44.400 13:47:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.400 13:47:35 -- setup/devices.sh@59 -- # local pci status 00:05:44.400 13:47:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.400 13:47:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:05:44.400 13:47:35 -- setup/devices.sh@47 -- # setup output config 00:05:44.400 13:47:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.400 13:47:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:48.594 13:47:38 -- setup/devices.sh@63 -- # found=1 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:38 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.594 13:47:39 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:05:48.594 13:47:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.498 13:47:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.498 13:47:41 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:50.498 13:47:41 -- setup/devices.sh@68 -- # return 0 00:05:50.498 13:47:41 -- setup/devices.sh@187 -- # cleanup_dm 00:05:50.498 13:47:41 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:50.499 13:47:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.499 13:47:41 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:50.499 13:47:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.499 13:47:41 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:50.499 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:50.499 13:47:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:50.499 13:47:41 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:50.499 00:05:50.499 real 0m15.369s 00:05:50.499 user 0m4.248s 00:05:50.499 sys 0m8.195s 00:05:50.499 13:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.499 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:50.499 ************************************ 00:05:50.499 END TEST dm_mount 00:05:50.499 ************************************ 00:05:50.499 13:47:41 -- setup/devices.sh@1 -- # cleanup 00:05:50.499 13:47:41 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:50.499 13:47:41 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:05:50.499 13:47:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.499 13:47:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:50.499 13:47:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:50.499 13:47:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:50.499 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:50.499 /dev/nvme0n1: 8 
bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:50.499 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:50.499 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:50.499 13:47:41 -- setup/devices.sh@12 -- # cleanup_dm 00:05:50.499 13:47:41 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:05:50.758 13:47:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:50.758 13:47:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.758 13:47:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:50.758 13:47:41 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:50.758 13:47:41 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:50.758 00:05:50.758 real 0m42.844s 00:05:50.758 user 0m12.635s 00:05:50.758 sys 0m25.043s 00:05:50.758 13:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.758 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:50.758 ************************************ 00:05:50.758 END TEST devices 00:05:50.758 ************************************ 00:05:50.758 00:05:50.758 real 2m35.207s 00:05:50.758 user 0m47.056s 00:05:50.758 sys 1m32.285s 00:05:50.758 13:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.758 13:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:50.758 ************************************ 00:05:50.758 END TEST setup.sh 00:05:50.758 ************************************ 00:05:50.758 13:47:41 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:54.952 Hugepages 00:05:54.952 node hugesize free / total 00:05:54.952 node0 1048576kB 0 / 0 00:05:54.952 node0 2048kB 2048 / 2048 00:05:54.952 node1 1048576kB 0 / 0 00:05:54.952 node1 2048kB 0 / 0 00:05:54.952 00:05:54.952 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:54.952 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:54.952 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:54.952 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:54.952 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:54.952 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:54.952 13:47:45 -- spdk/autotest.sh@141 -- # uname -s 00:05:54.952 13:47:45 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:54.952 13:47:45 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:54.952 13:47:45 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:05:59.145 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:05:59.145 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:59.145 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:59.146 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:59.146 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:59.146 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:59.146 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:59.146 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:59.146 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:02.516 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:04.424 13:47:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:05.362 13:47:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:05.362 13:47:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:05.362 13:47:56 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:06:05.362 13:47:56 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:06:05.362 13:47:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:05.362 13:47:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:05.362 13:47:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:05.362 13:47:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:05.362 13:47:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:05.362 13:47:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:05.362 13:47:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:06:05.362 13:47:56 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:09.558 Waiting for block devices as requested 00:06:09.558 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:06:09.558 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:09.558 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:09.558 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:09.819 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:09.819 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:09.819 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:10.078 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:10.078 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:10.337 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:10.337 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:10.337 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:10.596 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:10.596 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:10.596 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:10.855 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:10.855 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:12.759 13:48:03 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:06:12.759 13:48:03 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1487 -- # grep 0000:1a:00.0/nvme/nvme 00:06:12.759 13:48:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:06:12.759 13:48:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:06:12.759 13:48:03 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1530 -- # grep oacs 00:06:12.759 13:48:03 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:06:12.759 13:48:03 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:06:12.759 13:48:03 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:06:12.759 13:48:03 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:06:12.759 13:48:03 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:06:12.759 13:48:03 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:06:12.759 13:48:03 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:06:12.759 13:48:03 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:06:12.759 13:48:03 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:06:12.759 13:48:03 -- common/autotest_common.sh@1542 -- # continue 00:06:12.759 13:48:03 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:06:12.759 13:48:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:12.759 13:48:03 -- common/autotest_common.sh@10 -- # set +x 00:06:12.759 13:48:03 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:06:12.759 13:48:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:12.759 13:48:03 -- common/autotest_common.sh@10 -- # set +x 00:06:12.759 13:48:03 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:16.951 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:16.951 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:17.210 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:20.499 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:22.401 13:48:13 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:06:22.401 13:48:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:22.401 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.401 13:48:13 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:06:22.402 13:48:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:22.402 13:48:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:22.402 13:48:13 -- common/autotest_common.sh@1562 -- # bdfs=() 00:06:22.402 13:48:13 -- common/autotest_common.sh@1562 -- # local bdfs 00:06:22.402 
13:48:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:22.402 13:48:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:22.402 13:48:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:22.402 13:48:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:22.402 13:48:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:22.402 13:48:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:22.402 13:48:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:22.402 13:48:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:1a:00.0 00:06:22.402 13:48:13 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:06:22.402 13:48:13 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:06:22.402 13:48:13 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:06:22.402 13:48:13 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:22.402 13:48:13 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:06:22.402 13:48:13 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:06:22.402 13:48:13 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:06:22.402 13:48:13 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3887259 00:06:22.402 13:48:13 -- common/autotest_common.sh@1583 -- # waitforlisten 3887259 00:06:22.402 13:48:13 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.402 13:48:13 -- common/autotest_common.sh@819 -- # '[' -z 3887259 ']' 00:06:22.402 13:48:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.402 13:48:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.402 13:48:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.402 13:48:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.402 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.402 [2024-07-23 13:48:13.319117] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
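What waitforlisten does above reduces to polling the target's UNIX-domain RPC socket until it answers; a minimal sketch, with the rpc_get_methods probe standing in for the helper's internals (the real function also handles retry limits and alternate rpc_addr values):

  #!/bin/bash
  ./build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  # Poll until the RPC server answers on the default socket, per the
  # 'Waiting for process to start up...' message in the log.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1   # give up if the target died
      sleep 0.1
  done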
00:06:22.402 [2024-07-23 13:48:13.319194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887259 ] 00:06:22.402 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.661 [2024-07-23 13:48:13.442800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.661 [2024-07-23 13:48:13.541724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.661 [2024-07-23 13:48:13.541876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.598 13:48:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.598 13:48:14 -- common/autotest_common.sh@852 -- # return 0 00:06:23.598 13:48:14 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:06:23.598 13:48:14 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:06:23.598 13:48:14 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:06:26.888 nvme0n1 00:06:26.888 13:48:17 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:26.888 [2024-07-23 13:48:17.599853] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:26.888 request: 00:06:26.888 { 00:06:26.888 "nvme_ctrlr_name": "nvme0", 00:06:26.888 "password": "test", 00:06:26.888 "method": "bdev_nvme_opal_revert", 00:06:26.888 "req_id": 1 00:06:26.888 } 00:06:26.888 Got JSON-RPC error response 00:06:26.888 response: 00:06:26.888 { 00:06:26.888 "code": -32602, 00:06:26.888 "message": "Invalid parameters" 00:06:26.888 } 00:06:26.888 13:48:17 -- common/autotest_common.sh@1589 -- # true 00:06:26.888 13:48:17 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:06:26.888 13:48:17 -- common/autotest_common.sh@1593 -- # killprocess 3887259 00:06:26.888 13:48:17 -- common/autotest_common.sh@926 -- # '[' -z 3887259 ']' 00:06:26.888 13:48:17 -- common/autotest_common.sh@930 -- # kill -0 3887259 00:06:26.888 13:48:17 -- common/autotest_common.sh@931 -- # uname 00:06:26.888 13:48:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.888 13:48:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3887259 00:06:26.888 13:48:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.888 13:48:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.888 13:48:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3887259' 00:06:26.888 killing process with pid 3887259 00:06:26.888 13:48:17 -- common/autotest_common.sh@945 -- # kill 3887259 00:06:26.888 13:48:17 -- common/autotest_common.sh@950 -- # wait 3887259 00:06:31.119 13:48:21 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:06:31.119 13:48:21 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:06:31.119 13:48:21 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:06:31.119 13:48:21 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:06:31.119 13:48:21 -- spdk/autotest.sh@173 -- # timing_enter lib 00:06:31.119 13:48:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:31.119 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 13:48:21 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:31.119 
13:48:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.119 13:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.119 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 ************************************ 00:06:31.119 START TEST env 00:06:31.119 ************************************ 00:06:31.119 13:48:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:06:31.119 * Looking for test storage... 00:06:31.119 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:06:31.119 13:48:21 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:31.119 13:48:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.119 13:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.119 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.119 ************************************ 00:06:31.119 START TEST env_memory 00:06:31.119 ************************************ 00:06:31.119 13:48:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:06:31.119 00:06:31.119 00:06:31.119 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.119 http://cunit.sourceforge.net/ 00:06:31.119 00:06:31.119 00:06:31.119 Suite: memory 00:06:31.119 Test: alloc and free memory map ...[2024-07-23 13:48:21.766197] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:31.119 passed 00:06:31.119 Test: mem map translation ...[2024-07-23 13:48:21.786626] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:31.119 [2024-07-23 13:48:21.786651] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:31.119 [2024-07-23 13:48:21.786698] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:31.119 [2024-07-23 13:48:21.786711] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:31.119 passed 00:06:31.119 Test: mem map registration ...[2024-07-23 13:48:21.821022] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:31.119 [2024-07-23 13:48:21.821046] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:31.119 passed 00:06:31.119 Test: mem map adjacent registrations ...passed 00:06:31.119 00:06:31.119 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.119 suites 1 1 n/a 0 0 00:06:31.119 tests 4 4 4 0 0 00:06:31.119 asserts 152 152 152 0 n/a 00:06:31.119 00:06:31.119 Elapsed time = 0.124 seconds 00:06:31.119 00:06:31.119 real 0m0.139s 00:06:31.119 user 0m0.127s 00:06:31.119 sys 0m0.011s 00:06:31.120 13:48:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.120 13:48:21 -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.120 ************************************ 00:06:31.120 END TEST env_memory 00:06:31.120 ************************************ 00:06:31.120 13:48:21 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:31.120 13:48:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.120 13:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.120 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:31.120 ************************************ 00:06:31.120 START TEST env_vtophys 00:06:31.120 ************************************ 00:06:31.120 13:48:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:31.120 EAL: lib.eal log level changed from notice to debug 00:06:31.120 EAL: Detected lcore 0 as core 0 on socket 0 00:06:31.120 EAL: Detected lcore 1 as core 1 on socket 0 00:06:31.120 EAL: Detected lcore 2 as core 2 on socket 0 00:06:31.120 EAL: Detected lcore 3 as core 3 on socket 0 00:06:31.120 EAL: Detected lcore 4 as core 4 on socket 0 00:06:31.120 EAL: Detected lcore 5 as core 8 on socket 0 00:06:31.120 EAL: Detected lcore 6 as core 9 on socket 0 00:06:31.120 EAL: Detected lcore 7 as core 10 on socket 0 00:06:31.120 EAL: Detected lcore 8 as core 11 on socket 0 00:06:31.120 EAL: Detected lcore 9 as core 16 on socket 0 00:06:31.120 EAL: Detected lcore 10 as core 17 on socket 0 00:06:31.120 EAL: Detected lcore 11 as core 18 on socket 0 00:06:31.120 EAL: Detected lcore 12 as core 19 on socket 0 00:06:31.120 EAL: Detected lcore 13 as core 20 on socket 0 00:06:31.120 EAL: Detected lcore 14 as core 24 on socket 0 00:06:31.120 EAL: Detected lcore 15 as core 25 on socket 0 00:06:31.120 EAL: Detected lcore 16 as core 26 on socket 0 00:06:31.120 EAL: Detected lcore 17 as core 27 on socket 0 00:06:31.120 EAL: Detected lcore 18 as core 0 on socket 1 00:06:31.120 EAL: Detected lcore 19 as core 1 on socket 1 00:06:31.120 EAL: Detected lcore 20 as core 2 on socket 1 00:06:31.120 EAL: Detected lcore 21 as core 3 on socket 1 00:06:31.120 EAL: Detected lcore 22 as core 4 on socket 1 00:06:31.120 EAL: Detected lcore 23 as core 8 on socket 1 00:06:31.120 EAL: Detected lcore 24 as core 9 on socket 1 00:06:31.120 EAL: Detected lcore 25 as core 10 on socket 1 00:06:31.120 EAL: Detected lcore 26 as core 11 on socket 1 00:06:31.120 EAL: Detected lcore 27 as core 16 on socket 1 00:06:31.120 EAL: Detected lcore 28 as core 17 on socket 1 00:06:31.120 EAL: Detected lcore 29 as core 18 on socket 1 00:06:31.120 EAL: Detected lcore 30 as core 19 on socket 1 00:06:31.120 EAL: Detected lcore 31 as core 20 on socket 1 00:06:31.120 EAL: Detected lcore 32 as core 24 on socket 1 00:06:31.120 EAL: Detected lcore 33 as core 25 on socket 1 00:06:31.120 EAL: Detected lcore 34 as core 26 on socket 1 00:06:31.120 EAL: Detected lcore 35 as core 27 on socket 1 00:06:31.120 EAL: Detected lcore 36 as core 0 on socket 0 00:06:31.120 EAL: Detected lcore 37 as core 1 on socket 0 00:06:31.120 EAL: Detected lcore 38 as core 2 on socket 0 00:06:31.120 EAL: Detected lcore 39 as core 3 on socket 0 00:06:31.120 EAL: Detected lcore 40 as core 4 on socket 0 00:06:31.120 EAL: Detected lcore 41 as core 8 on socket 0 00:06:31.120 EAL: Detected lcore 42 as core 9 on socket 0 00:06:31.120 EAL: Detected lcore 43 as core 10 on socket 0 00:06:31.120 EAL: Detected lcore 44 as core 11 on socket 0 00:06:31.120 EAL: Detected lcore 45 as core 16 on socket 
0 00:06:31.120 EAL: Detected lcore 46 as core 17 on socket 0 00:06:31.120 EAL: Detected lcore 47 as core 18 on socket 0 00:06:31.120 EAL: Detected lcore 48 as core 19 on socket 0 00:06:31.120 EAL: Detected lcore 49 as core 20 on socket 0 00:06:31.120 EAL: Detected lcore 50 as core 24 on socket 0 00:06:31.120 EAL: Detected lcore 51 as core 25 on socket 0 00:06:31.120 EAL: Detected lcore 52 as core 26 on socket 0 00:06:31.120 EAL: Detected lcore 53 as core 27 on socket 0 00:06:31.120 EAL: Detected lcore 54 as core 0 on socket 1 00:06:31.120 EAL: Detected lcore 55 as core 1 on socket 1 00:06:31.120 EAL: Detected lcore 56 as core 2 on socket 1 00:06:31.120 EAL: Detected lcore 57 as core 3 on socket 1 00:06:31.120 EAL: Detected lcore 58 as core 4 on socket 1 00:06:31.120 EAL: Detected lcore 59 as core 8 on socket 1 00:06:31.120 EAL: Detected lcore 60 as core 9 on socket 1 00:06:31.120 EAL: Detected lcore 61 as core 10 on socket 1 00:06:31.120 EAL: Detected lcore 62 as core 11 on socket 1 00:06:31.120 EAL: Detected lcore 63 as core 16 on socket 1 00:06:31.120 EAL: Detected lcore 64 as core 17 on socket 1 00:06:31.120 EAL: Detected lcore 65 as core 18 on socket 1 00:06:31.120 EAL: Detected lcore 66 as core 19 on socket 1 00:06:31.120 EAL: Detected lcore 67 as core 20 on socket 1 00:06:31.120 EAL: Detected lcore 68 as core 24 on socket 1 00:06:31.120 EAL: Detected lcore 69 as core 25 on socket 1 00:06:31.120 EAL: Detected lcore 70 as core 26 on socket 1 00:06:31.120 EAL: Detected lcore 71 as core 27 on socket 1 00:06:31.120 EAL: Maximum logical cores by configuration: 128 00:06:31.120 EAL: Detected CPU lcores: 72 00:06:31.120 EAL: Detected NUMA nodes: 2 00:06:31.120 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:31.120 EAL: Checking presence of .so 'librte_eal.so.24' 00:06:31.120 EAL: Checking presence of .so 'librte_eal.so' 00:06:31.120 EAL: Detected static linkage of DPDK 00:06:31.120 EAL: No shared files mode enabled, IPC will be disabled 00:06:31.120 EAL: Bus pci wants IOVA as 'DC' 00:06:31.120 EAL: Buses did not request a specific IOVA mode. 00:06:31.120 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:31.120 EAL: Selected IOVA mode 'VA' 00:06:31.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.120 EAL: Probing VFIO support... 00:06:31.120 EAL: IOMMU type 1 (Type 1) is supported 00:06:31.120 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:31.120 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:31.120 EAL: VFIO support initialized 00:06:31.120 EAL: Ask a virtual area of 0x2e000 bytes 00:06:31.120 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:31.120 EAL: Setting up physically contiguous memory... 
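The lcore map EAL prints here is read straight out of the kernel's CPU topology files; the same table can be reproduced without DPDK at all (standard sysfs paths, not an SPDK helper):

  #!/bin/bash
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      lcore=${cpu##*cpu}                                  # logical CPU number
      core=$(cat "$cpu/topology/core_id")                 # physical core id
      socket=$(cat "$cpu/topology/physical_package_id")   # CPU socket / NUMA package
      echo "Detected lcore $lcore as core $core on socket $socket"
  done | sort -k3,3n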
00:06:31.120 EAL: Setting maximum number of open files to 524288 00:06:31.120 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:31.120 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:31.120 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:31.120 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:31.120 EAL: Ask a virtual area of 0x61000 bytes 00:06:31.120 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:31.120 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:31.120 EAL: Ask a virtual area of 0x400000000 bytes 00:06:31.120 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:31.120 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:31.120 EAL: Hugepages will be freed exactly as allocated. 00:06:31.120 EAL: No shared files mode enabled, IPC is disabled 00:06:31.120 EAL: No shared files mode enabled, IPC is disabled 00:06:31.120 EAL: TSC frequency is ~2300000 KHz 00:06:31.120 EAL: Main lcore 0 is ready (tid=7febc1129a00;cpuset=[0]) 00:06:31.120 EAL: Trying to obtain current memory policy. 00:06:31.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.120 EAL: Restoring previous memory policy: 0 00:06:31.120 EAL: request: mp_malloc_sync 00:06:31.120 EAL: No shared files mode enabled, IPC is disabled 00:06:31.120 EAL: Heap on socket 0 was expanded by 2MB 00:06:31.120 EAL: No shared files mode enabled, IPC is disabled 00:06:31.120 EAL: Mem event callback 'spdk:(nil)' registered 00:06:31.120 00:06:31.120 00:06:31.120 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.120 http://cunit.sourceforge.net/ 00:06:31.120 00:06:31.120 00:06:31.120 Suite: components_suite 00:06:31.120 Test: vtophys_malloc_test ...passed 00:06:31.121 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.121 EAL: Restoring previous memory policy: 4 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was expanded by 4MB 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was shrunk by 4MB 00:06:31.121 EAL: Trying to obtain current memory policy. 00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.121 EAL: Restoring previous memory policy: 4 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was expanded by 6MB 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was shrunk by 6MB 00:06:31.121 EAL: Trying to obtain current memory policy. 00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.121 EAL: Restoring previous memory policy: 4 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was expanded by 10MB 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was shrunk by 10MB 00:06:31.121 EAL: Trying to obtain current memory policy. 
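The heap expand/shrink messages running through this suite are EAL mapping 2 MB hugepages in and out of the heap on demand; the pool those pages come from can be watched from another shell while the test runs (standard kernel interfaces, nothing SPDK-specific):

  # Per-NUMA-node free 2 MB hugepages backing the grow/shrink below:
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
  # System-wide totals:
  grep -E 'HugePages_(Total|Free)' /proc/meminfo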
00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.121 EAL: Restoring previous memory policy: 4 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was expanded by 18MB 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was shrunk by 18MB 00:06:31.121 EAL: Trying to obtain current memory policy. 00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.121 EAL: Restoring previous memory policy: 4 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was expanded by 34MB 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was shrunk by 34MB 00:06:31.121 EAL: Trying to obtain current memory policy. 00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.121 EAL: Restoring previous memory policy: 4 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was expanded by 66MB 00:06:31.121 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.121 EAL: request: mp_malloc_sync 00:06:31.121 EAL: No shared files mode enabled, IPC is disabled 00:06:31.121 EAL: Heap on socket 0 was shrunk by 66MB 00:06:31.121 EAL: Trying to obtain current memory policy. 00:06:31.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.380 EAL: Restoring previous memory policy: 4 00:06:31.380 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.380 EAL: request: mp_malloc_sync 00:06:31.380 EAL: No shared files mode enabled, IPC is disabled 00:06:31.380 EAL: Heap on socket 0 was expanded by 130MB 00:06:31.380 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.380 EAL: request: mp_malloc_sync 00:06:31.380 EAL: No shared files mode enabled, IPC is disabled 00:06:31.380 EAL: Heap on socket 0 was shrunk by 130MB 00:06:31.380 EAL: Trying to obtain current memory policy. 00:06:31.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.380 EAL: Restoring previous memory policy: 4 00:06:31.380 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.380 EAL: request: mp_malloc_sync 00:06:31.380 EAL: No shared files mode enabled, IPC is disabled 00:06:31.380 EAL: Heap on socket 0 was expanded by 258MB 00:06:31.380 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.380 EAL: request: mp_malloc_sync 00:06:31.380 EAL: No shared files mode enabled, IPC is disabled 00:06:31.380 EAL: Heap on socket 0 was shrunk by 258MB 00:06:31.380 EAL: Trying to obtain current memory policy. 
00:06:31.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.639 EAL: Restoring previous memory policy: 4 00:06:31.639 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.639 EAL: request: mp_malloc_sync 00:06:31.639 EAL: No shared files mode enabled, IPC is disabled 00:06:31.639 EAL: Heap on socket 0 was expanded by 514MB 00:06:31.639 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.639 EAL: request: mp_malloc_sync 00:06:31.640 EAL: No shared files mode enabled, IPC is disabled 00:06:31.640 EAL: Heap on socket 0 was shrunk by 514MB 00:06:31.640 EAL: Trying to obtain current memory policy. 00:06:31.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.899 EAL: Restoring previous memory policy: 4 00:06:31.899 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.899 EAL: request: mp_malloc_sync 00:06:31.899 EAL: No shared files mode enabled, IPC is disabled 00:06:31.899 EAL: Heap on socket 0 was expanded by 1026MB 00:06:32.158 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.417 EAL: request: mp_malloc_sync 00:06:32.417 EAL: No shared files mode enabled, IPC is disabled 00:06:32.417 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:32.417 passed 00:06:32.417 00:06:32.417 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.417 suites 1 1 n/a 0 0 00:06:32.417 tests 2 2 2 0 0 00:06:32.417 asserts 497 497 497 0 n/a 00:06:32.417 00:06:32.417 Elapsed time = 1.154 seconds 00:06:32.417 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.417 EAL: request: mp_malloc_sync 00:06:32.417 EAL: No shared files mode enabled, IPC is disabled 00:06:32.417 EAL: Heap on socket 0 was shrunk by 2MB 00:06:32.417 EAL: No shared files mode enabled, IPC is disabled 00:06:32.417 EAL: No shared files mode enabled, IPC is disabled 00:06:32.417 EAL: No shared files mode enabled, IPC is disabled 00:06:32.417 00:06:32.417 real 0m1.330s 00:06:32.417 user 0m0.765s 00:06:32.417 sys 0m0.537s 00:06:32.417 13:48:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.417 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.417 ************************************ 00:06:32.417 END TEST env_vtophys 00:06:32.417 ************************************ 00:06:32.417 13:48:23 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:32.417 13:48:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.417 13:48:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.417 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.417 ************************************ 00:06:32.417 START TEST env_pci 00:06:32.417 ************************************ 00:06:32.417 13:48:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:06:32.417 00:06:32.417 00:06:32.417 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.417 http://cunit.sourceforge.net/ 00:06:32.417 00:06:32.417 00:06:32.417 Suite: pci 00:06:32.417 Test: pci_hook ...[2024-07-23 13:48:23.312741] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3888596 has claimed it 00:06:32.417 EAL: Cannot find device (10000:00:01.0) 00:06:32.417 EAL: Failed to attach device on primary process 00:06:32.417 passed 00:06:32.417 00:06:32.417 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.417 suites 1 1 n/a 0 0 00:06:32.417 tests 1 1 1 0 0 
00:06:32.417 asserts 25 25 25 0 n/a 00:06:32.417 00:06:32.417 Elapsed time = 0.050 seconds 00:06:32.417 00:06:32.417 real 0m0.070s 00:06:32.417 user 0m0.017s 00:06:32.417 sys 0m0.053s 00:06:32.417 13:48:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.417 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.417 ************************************ 00:06:32.417 END TEST env_pci 00:06:32.417 ************************************ 00:06:32.417 13:48:23 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:32.417 13:48:23 -- env/env.sh@15 -- # uname 00:06:32.417 13:48:23 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:32.417 13:48:23 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:32.417 13:48:23 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.417 13:48:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:32.417 13:48:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.417 13:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:32.417 ************************************ 00:06:32.417 START TEST env_dpdk_post_init 00:06:32.417 ************************************ 00:06:32.417 13:48:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:32.676 EAL: Detected CPU lcores: 72 00:06:32.676 EAL: Detected NUMA nodes: 2 00:06:32.676 EAL: Detected static linkage of DPDK 00:06:32.676 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:32.676 EAL: Selected IOVA mode 'VA' 00:06:32.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.676 EAL: VFIO support initialized 00:06:32.676 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:32.676 EAL: Using IOMMU type 1 (Type 1) 00:06:33.612 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:06:38.883 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:06:38.884 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:06:39.143 Starting DPDK initialization... 00:06:39.143 Starting SPDK post initialization... 00:06:39.143 SPDK NVMe probe 00:06:39.143 Attaching to 0000:1a:00.0 00:06:39.143 Attached to 0000:1a:00.0 00:06:39.143 Cleaning up... 
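env_dpdk_post_init above only attaches to the NVMe device because setup.sh rebound it to vfio-pci earlier in the log; if a probe like this prints 'Cleaning up...' without an 'Attached to' line, the first thing to check is the active driver binding (the BDF is the one from this run):

  # Which kernel driver currently owns the NVMe device?
  basename "$(readlink /sys/bus/pci/devices/0000:1a:00.0/driver)"   # expect: vfio-pci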
00:06:39.143 00:06:39.143 real 0m6.551s 00:06:39.143 user 0m4.958s 00:06:39.143 sys 0m0.839s 00:06:39.143 13:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.143 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:06:39.143 ************************************ 00:06:39.143 END TEST env_dpdk_post_init 00:06:39.143 ************************************ 00:06:39.143 13:48:30 -- env/env.sh@26 -- # uname 00:06:39.143 13:48:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:39.143 13:48:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.143 13:48:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.143 13:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.143 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.143 ************************************ 00:06:39.143 START TEST env_mem_callbacks 00:06:39.143 ************************************ 00:06:39.143 13:48:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:39.143 EAL: Detected CPU lcores: 72 00:06:39.143 EAL: Detected NUMA nodes: 2 00:06:39.143 EAL: Detected static linkage of DPDK 00:06:39.144 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:39.144 EAL: Selected IOVA mode 'VA' 00:06:39.144 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.144 EAL: VFIO support initialized 00:06:39.144 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:39.144 00:06:39.144 00:06:39.144 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.144 http://cunit.sourceforge.net/ 00:06:39.144 00:06:39.144 00:06:39.144 Suite: memory 00:06:39.144 Test: test ... 
00:06:39.144 register 0x200000200000 2097152 00:06:39.144 malloc 3145728 00:06:39.144 register 0x200000400000 4194304 00:06:39.144 buf 0x200000500000 len 3145728 PASSED 00:06:39.144 malloc 64 00:06:39.144 buf 0x2000004fff40 len 64 PASSED 00:06:39.144 malloc 4194304 00:06:39.144 register 0x200000800000 6291456 00:06:39.144 buf 0x200000a00000 len 4194304 PASSED 00:06:39.144 free 0x200000500000 3145728 00:06:39.144 free 0x2000004fff40 64 00:06:39.144 unregister 0x200000400000 4194304 PASSED 00:06:39.144 free 0x200000a00000 4194304 00:06:39.144 unregister 0x200000800000 6291456 PASSED 00:06:39.144 malloc 8388608 00:06:39.144 register 0x200000400000 10485760 00:06:39.144 buf 0x200000600000 len 8388608 PASSED 00:06:39.144 free 0x200000600000 8388608 00:06:39.144 unregister 0x200000400000 10485760 PASSED 00:06:39.144 passed 00:06:39.144 00:06:39.144 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.144 suites 1 1 n/a 0 0 00:06:39.144 tests 1 1 1 0 0 00:06:39.144 asserts 15 15 15 0 n/a 00:06:39.144 00:06:39.144 Elapsed time = 0.005 seconds 00:06:39.144 00:06:39.144 real 0m0.085s 00:06:39.144 user 0m0.022s 00:06:39.144 sys 0m0.063s 00:06:39.144 13:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.144 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.144 ************************************ 00:06:39.144 END TEST env_mem_callbacks 00:06:39.144 ************************************ 00:06:39.144 00:06:39.144 real 0m8.541s 00:06:39.144 user 0m6.020s 00:06:39.144 sys 0m1.792s 00:06:39.144 13:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.144 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.144 ************************************ 00:06:39.144 END TEST env 00:06:39.144 ************************************ 00:06:39.404 13:48:30 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:39.404 13:48:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.404 13:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.404 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.404 ************************************ 00:06:39.404 START TEST rpc 00:06:39.404 ************************************ 00:06:39.404 13:48:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:06:39.404 * Looking for test storage... 00:06:39.404 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:39.404 13:48:30 -- rpc/rpc.sh@65 -- # spdk_pid=3889716 00:06:39.404 13:48:30 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.404 13:48:30 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:39.404 13:48:30 -- rpc/rpc.sh@67 -- # waitforlisten 3889716 00:06:39.404 13:48:30 -- common/autotest_common.sh@819 -- # '[' -z 3889716 ']' 00:06:39.404 13:48:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.404 13:48:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.404 13:48:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
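The spdk_tgt that rpc.sh launches below is started with '-e bdev', which is what produces the 'Tracepoint Group Mask bdev specified' notice and the 0x8 group mask that rpc_trace_cmd_test later reads back. In isolation, roughly:

  ./build/bin/spdk_tgt -e bdev &        # enable the bdev tracepoint group
  # Once it is listening, the mask and trace shm path are queryable:
  ./scripts/rpc.py trace_get_info
  # and, as the log itself suggests, a runtime snapshot can be captured with
  # spdk_trace -s spdk_tgt -p <pid of the target>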
00:06:39.404 13:48:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.404 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.404 [2024-07-23 13:48:30.340255] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:39.404 [2024-07-23 13:48:30.340333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889716 ] 00:06:39.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.665 [2024-07-23 13:48:30.453836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.665 [2024-07-23 13:48:30.551630] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.665 [2024-07-23 13:48:30.551769] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:39.665 [2024-07-23 13:48:30.551786] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3889716' to capture a snapshot of events at runtime. 00:06:39.665 [2024-07-23 13:48:30.551800] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3889716 for offline analysis/debug. 00:06:39.665 [2024-07-23 13:48:30.551825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.604 13:48:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.604 13:48:31 -- common/autotest_common.sh@852 -- # return 0 00:06:40.604 13:48:31 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:40.605 13:48:31 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:06:40.605 13:48:31 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:40.605 13:48:31 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:40.605 13:48:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.605 13:48:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.605 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.605 ************************************ 00:06:40.605 START TEST rpc_integrity 00:06:40.605 ************************************ 00:06:40.605 13:48:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:40.605 13:48:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:40.605 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.605 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.605 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.605 13:48:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:40.605 13:48:31 -- rpc/rpc.sh@13 -- # jq length 00:06:40.605 13:48:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.605 13:48:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:40.605 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.605 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.605 13:48:31 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.605 13:48:31 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:40.605 13:48:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:40.605 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.605 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.605 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.605 13:48:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:40.605 { 00:06:40.605 "name": "Malloc0", 00:06:40.605 "aliases": [ 00:06:40.605 "dd8ef57f-b181-4471-809e-ba6a01e54606" 00:06:40.605 ], 00:06:40.605 "product_name": "Malloc disk", 00:06:40.605 "block_size": 512, 00:06:40.605 "num_blocks": 16384, 00:06:40.605 "uuid": "dd8ef57f-b181-4471-809e-ba6a01e54606", 00:06:40.605 "assigned_rate_limits": { 00:06:40.605 "rw_ios_per_sec": 0, 00:06:40.605 "rw_mbytes_per_sec": 0, 00:06:40.605 "r_mbytes_per_sec": 0, 00:06:40.605 "w_mbytes_per_sec": 0 00:06:40.605 }, 00:06:40.605 "claimed": false, 00:06:40.605 "zoned": false, 00:06:40.605 "supported_io_types": { 00:06:40.605 "read": true, 00:06:40.605 "write": true, 00:06:40.605 "unmap": true, 00:06:40.605 "write_zeroes": true, 00:06:40.605 "flush": true, 00:06:40.605 "reset": true, 00:06:40.605 "compare": false, 00:06:40.605 "compare_and_write": false, 00:06:40.605 "abort": true, 00:06:40.605 "nvme_admin": false, 00:06:40.605 "nvme_io": false 00:06:40.605 }, 00:06:40.605 "memory_domains": [ 00:06:40.605 { 00:06:40.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.605 "dma_device_type": 2 00:06:40.605 } 00:06:40.605 ], 00:06:40.605 "driver_specific": {} 00:06:40.605 } 00:06:40.605 ]' 00:06:40.605 13:48:31 -- rpc/rpc.sh@17 -- # jq length 00:06:40.605 13:48:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:40.605 13:48:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:40.605 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.605 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.605 [2024-07-23 13:48:31.444456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:40.605 [2024-07-23 13:48:31.444500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.605 [2024-07-23 13:48:31.444522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5a915a0 00:06:40.605 [2024-07-23 13:48:31.444535] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.605 [2024-07-23 13:48:31.445768] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.605 [2024-07-23 13:48:31.445797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:40.605 Passthru0 00:06:40.605 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.605 13:48:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:40.605 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.605 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.605 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.605 13:48:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:40.605 { 00:06:40.605 "name": "Malloc0", 00:06:40.605 "aliases": [ 00:06:40.605 "dd8ef57f-b181-4471-809e-ba6a01e54606" 00:06:40.605 ], 00:06:40.605 "product_name": "Malloc disk", 00:06:40.605 "block_size": 512, 00:06:40.605 "num_blocks": 16384, 00:06:40.605 "uuid": "dd8ef57f-b181-4471-809e-ba6a01e54606", 00:06:40.605 "assigned_rate_limits": { 00:06:40.605 "rw_ios_per_sec": 0, 00:06:40.605 
"rw_mbytes_per_sec": 0, 00:06:40.605 "r_mbytes_per_sec": 0, 00:06:40.605 "w_mbytes_per_sec": 0 00:06:40.605 }, 00:06:40.605 "claimed": true, 00:06:40.605 "claim_type": "exclusive_write", 00:06:40.605 "zoned": false, 00:06:40.605 "supported_io_types": { 00:06:40.605 "read": true, 00:06:40.605 "write": true, 00:06:40.605 "unmap": true, 00:06:40.605 "write_zeroes": true, 00:06:40.605 "flush": true, 00:06:40.605 "reset": true, 00:06:40.605 "compare": false, 00:06:40.605 "compare_and_write": false, 00:06:40.605 "abort": true, 00:06:40.605 "nvme_admin": false, 00:06:40.605 "nvme_io": false 00:06:40.605 }, 00:06:40.605 "memory_domains": [ 00:06:40.605 { 00:06:40.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.606 "dma_device_type": 2 00:06:40.606 } 00:06:40.606 ], 00:06:40.606 "driver_specific": {} 00:06:40.606 }, 00:06:40.606 { 00:06:40.606 "name": "Passthru0", 00:06:40.606 "aliases": [ 00:06:40.606 "93513379-5dc7-5473-9290-00dd1a7d55b1" 00:06:40.606 ], 00:06:40.606 "product_name": "passthru", 00:06:40.606 "block_size": 512, 00:06:40.606 "num_blocks": 16384, 00:06:40.606 "uuid": "93513379-5dc7-5473-9290-00dd1a7d55b1", 00:06:40.606 "assigned_rate_limits": { 00:06:40.606 "rw_ios_per_sec": 0, 00:06:40.606 "rw_mbytes_per_sec": 0, 00:06:40.606 "r_mbytes_per_sec": 0, 00:06:40.606 "w_mbytes_per_sec": 0 00:06:40.606 }, 00:06:40.606 "claimed": false, 00:06:40.606 "zoned": false, 00:06:40.606 "supported_io_types": { 00:06:40.606 "read": true, 00:06:40.606 "write": true, 00:06:40.606 "unmap": true, 00:06:40.606 "write_zeroes": true, 00:06:40.606 "flush": true, 00:06:40.606 "reset": true, 00:06:40.606 "compare": false, 00:06:40.606 "compare_and_write": false, 00:06:40.606 "abort": true, 00:06:40.606 "nvme_admin": false, 00:06:40.606 "nvme_io": false 00:06:40.606 }, 00:06:40.606 "memory_domains": [ 00:06:40.606 { 00:06:40.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.606 "dma_device_type": 2 00:06:40.606 } 00:06:40.606 ], 00:06:40.606 "driver_specific": { 00:06:40.606 "passthru": { 00:06:40.606 "name": "Passthru0", 00:06:40.606 "base_bdev_name": "Malloc0" 00:06:40.606 } 00:06:40.606 } 00:06:40.606 } 00:06:40.606 ]' 00:06:40.606 13:48:31 -- rpc/rpc.sh@21 -- # jq length 00:06:40.606 13:48:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:40.606 13:48:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:40.606 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.606 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.606 13:48:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:40.606 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.606 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.606 13:48:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:40.606 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.606 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.606 13:48:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:40.606 13:48:31 -- rpc/rpc.sh@26 -- # jq length 00:06:40.606 13:48:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:40.606 00:06:40.606 real 0m0.294s 00:06:40.606 user 0m0.192s 00:06:40.606 sys 0m0.042s 00:06:40.606 13:48:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.606 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.606 
************************************ 00:06:40.606 END TEST rpc_integrity 00:06:40.606 ************************************ 00:06:40.866 13:48:31 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:40.866 13:48:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.866 13:48:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 ************************************ 00:06:40.866 START TEST rpc_plugins 00:06:40.866 ************************************ 00:06:40.866 13:48:31 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:06:40.866 13:48:31 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:40.866 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.866 13:48:31 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:40.866 13:48:31 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:40.866 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.866 13:48:31 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:40.866 { 00:06:40.866 "name": "Malloc1", 00:06:40.866 "aliases": [ 00:06:40.866 "b7c02715-250f-4396-a09c-ffac2bfc4961" 00:06:40.866 ], 00:06:40.866 "product_name": "Malloc disk", 00:06:40.866 "block_size": 4096, 00:06:40.866 "num_blocks": 256, 00:06:40.866 "uuid": "b7c02715-250f-4396-a09c-ffac2bfc4961", 00:06:40.866 "assigned_rate_limits": { 00:06:40.866 "rw_ios_per_sec": 0, 00:06:40.866 "rw_mbytes_per_sec": 0, 00:06:40.866 "r_mbytes_per_sec": 0, 00:06:40.866 "w_mbytes_per_sec": 0 00:06:40.866 }, 00:06:40.866 "claimed": false, 00:06:40.866 "zoned": false, 00:06:40.866 "supported_io_types": { 00:06:40.866 "read": true, 00:06:40.866 "write": true, 00:06:40.866 "unmap": true, 00:06:40.866 "write_zeroes": true, 00:06:40.866 "flush": true, 00:06:40.866 "reset": true, 00:06:40.866 "compare": false, 00:06:40.866 "compare_and_write": false, 00:06:40.866 "abort": true, 00:06:40.866 "nvme_admin": false, 00:06:40.866 "nvme_io": false 00:06:40.866 }, 00:06:40.866 "memory_domains": [ 00:06:40.866 { 00:06:40.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.866 "dma_device_type": 2 00:06:40.866 } 00:06:40.866 ], 00:06:40.866 "driver_specific": {} 00:06:40.866 } 00:06:40.866 ]' 00:06:40.866 13:48:31 -- rpc/rpc.sh@32 -- # jq length 00:06:40.866 13:48:31 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:40.866 13:48:31 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:40.866 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.866 13:48:31 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:40.866 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.866 13:48:31 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:40.866 13:48:31 -- rpc/rpc.sh@36 -- # jq length 00:06:40.866 13:48:31 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:40.866 00:06:40.866 real 0m0.149s 00:06:40.866 user 0m0.093s 00:06:40.866 sys 0m0.020s 00:06:40.866 13:48:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 
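Each rpc_* subtest here follows the same shape: create a bdev over JSON-RPC, assert on the JSON that bdev_get_bdevs returns, then delete it and assert the list is empty again. Stripped of the test harness, and using the same malloc geometry as rpc_integrity above (8 MB total, 512-byte blocks), that is roughly:

  [ "$(./scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]   # start empty
  malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)          # -> e.g. Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq -r '.[0].num_blocks'    # 16384 = 8 MiB / 512 B
  ./scripts/rpc.py bdev_malloc_delete "$malloc"
  [ "$(./scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]   # empty again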
00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 ************************************ 00:06:40.866 END TEST rpc_plugins 00:06:40.866 ************************************ 00:06:40.866 13:48:31 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:40.866 13:48:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.866 13:48:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.866 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.866 ************************************ 00:06:40.867 START TEST rpc_trace_cmd_test 00:06:40.867 ************************************ 00:06:40.867 13:48:31 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:06:40.867 13:48:31 -- rpc/rpc.sh@40 -- # local info 00:06:40.867 13:48:31 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:40.867 13:48:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:40.867 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.867 13:48:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:40.867 13:48:31 -- rpc/rpc.sh@42 -- # info='{ 00:06:40.867 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3889716", 00:06:40.867 "tpoint_group_mask": "0x8", 00:06:40.867 "iscsi_conn": { 00:06:40.867 "mask": "0x2", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "scsi": { 00:06:40.867 "mask": "0x4", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "bdev": { 00:06:40.867 "mask": "0x8", 00:06:40.867 "tpoint_mask": "0xffffffffffffffff" 00:06:40.867 }, 00:06:40.867 "nvmf_rdma": { 00:06:40.867 "mask": "0x10", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "nvmf_tcp": { 00:06:40.867 "mask": "0x20", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "ftl": { 00:06:40.867 "mask": "0x40", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "blobfs": { 00:06:40.867 "mask": "0x80", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "dsa": { 00:06:40.867 "mask": "0x200", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "thread": { 00:06:40.867 "mask": "0x400", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "nvme_pcie": { 00:06:40.867 "mask": "0x800", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "iaa": { 00:06:40.867 "mask": "0x1000", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "nvme_tcp": { 00:06:40.867 "mask": "0x2000", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 }, 00:06:40.867 "bdev_nvme": { 00:06:40.867 "mask": "0x4000", 00:06:40.867 "tpoint_mask": "0x0" 00:06:40.867 } 00:06:40.867 }' 00:06:40.867 13:48:31 -- rpc/rpc.sh@43 -- # jq length 00:06:41.126 13:48:31 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:41.126 13:48:31 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:41.126 13:48:31 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:41.127 13:48:31 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:41.127 13:48:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:41.127 13:48:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:41.127 13:48:32 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:41.127 13:48:32 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:41.127 13:48:32 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:41.127 00:06:41.127 real 0m0.241s 00:06:41.127 user 0m0.203s 00:06:41.127 sys 0m0.032s 00:06:41.127 13:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.127 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 
************************************ 00:06:41.127 END TEST rpc_trace_cmd_test 00:06:41.127 ************************************ 00:06:41.127 13:48:32 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:41.127 13:48:32 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:41.127 13:48:32 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:41.127 13:48:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.127 13:48:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.127 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 ************************************ 00:06:41.127 START TEST rpc_daemon_integrity 00:06:41.127 ************************************ 00:06:41.127 13:48:32 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:41.127 13:48:32 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:41.127 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.127 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.127 13:48:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:41.387 13:48:32 -- rpc/rpc.sh@13 -- # jq length 00:06:41.387 13:48:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:41.387 13:48:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:41.387 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.387 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.387 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.387 13:48:32 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:41.387 13:48:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:41.387 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.387 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.387 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.387 13:48:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:41.387 { 00:06:41.387 "name": "Malloc2", 00:06:41.387 "aliases": [ 00:06:41.387 "9ef94f1b-6306-48e9-95ae-0febf5efd346" 00:06:41.387 ], 00:06:41.387 "product_name": "Malloc disk", 00:06:41.387 "block_size": 512, 00:06:41.387 "num_blocks": 16384, 00:06:41.387 "uuid": "9ef94f1b-6306-48e9-95ae-0febf5efd346", 00:06:41.387 "assigned_rate_limits": { 00:06:41.387 "rw_ios_per_sec": 0, 00:06:41.387 "rw_mbytes_per_sec": 0, 00:06:41.387 "r_mbytes_per_sec": 0, 00:06:41.387 "w_mbytes_per_sec": 0 00:06:41.387 }, 00:06:41.387 "claimed": false, 00:06:41.387 "zoned": false, 00:06:41.387 "supported_io_types": { 00:06:41.387 "read": true, 00:06:41.387 "write": true, 00:06:41.387 "unmap": true, 00:06:41.387 "write_zeroes": true, 00:06:41.387 "flush": true, 00:06:41.387 "reset": true, 00:06:41.387 "compare": false, 00:06:41.387 "compare_and_write": false, 00:06:41.387 "abort": true, 00:06:41.387 "nvme_admin": false, 00:06:41.387 "nvme_io": false 00:06:41.387 }, 00:06:41.387 "memory_domains": [ 00:06:41.387 { 00:06:41.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.387 "dma_device_type": 2 00:06:41.387 } 00:06:41.387 ], 00:06:41.387 "driver_specific": {} 00:06:41.387 } 00:06:41.387 ]' 00:06:41.387 13:48:32 -- rpc/rpc.sh@17 -- # jq length 00:06:41.387 13:48:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:41.387 13:48:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:41.387 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.387 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.387 [2024-07-23 13:48:32.278705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
Malloc2 00:06:41.387 [2024-07-23 13:48:32.278745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.387 [2024-07-23 13:48:32.278767] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5c2b0d0 00:06:41.387 [2024-07-23 13:48:32.278780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.387 [2024-07-23 13:48:32.279771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.387 [2024-07-23 13:48:32.279800] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:41.387 Passthru0 00:06:41.387 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.387 13:48:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:41.387 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.387 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.387 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.387 13:48:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.387 { 00:06:41.387 "name": "Malloc2", 00:06:41.387 "aliases": [ 00:06:41.387 "9ef94f1b-6306-48e9-95ae-0febf5efd346" 00:06:41.387 ], 00:06:41.387 "product_name": "Malloc disk", 00:06:41.387 "block_size": 512, 00:06:41.387 "num_blocks": 16384, 00:06:41.387 "uuid": "9ef94f1b-6306-48e9-95ae-0febf5efd346", 00:06:41.387 "assigned_rate_limits": { 00:06:41.387 "rw_ios_per_sec": 0, 00:06:41.387 "rw_mbytes_per_sec": 0, 00:06:41.387 "r_mbytes_per_sec": 0, 00:06:41.387 "w_mbytes_per_sec": 0 00:06:41.387 }, 00:06:41.387 "claimed": true, 00:06:41.387 "claim_type": "exclusive_write", 00:06:41.387 "zoned": false, 00:06:41.387 "supported_io_types": { 00:06:41.387 "read": true, 00:06:41.387 "write": true, 00:06:41.387 "unmap": true, 00:06:41.387 "write_zeroes": true, 00:06:41.387 "flush": true, 00:06:41.387 "reset": true, 00:06:41.387 "compare": false, 00:06:41.387 "compare_and_write": false, 00:06:41.387 "abort": true, 00:06:41.387 "nvme_admin": false, 00:06:41.387 "nvme_io": false 00:06:41.387 }, 00:06:41.387 "memory_domains": [ 00:06:41.387 { 00:06:41.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.387 "dma_device_type": 2 00:06:41.387 } 00:06:41.387 ], 00:06:41.387 "driver_specific": {} 00:06:41.387 }, 00:06:41.387 { 00:06:41.388 "name": "Passthru0", 00:06:41.388 "aliases": [ 00:06:41.388 "97eeb511-d38c-5ffa-b2f9-00748f51eddb" 00:06:41.388 ], 00:06:41.388 "product_name": "passthru", 00:06:41.388 "block_size": 512, 00:06:41.388 "num_blocks": 16384, 00:06:41.388 "uuid": "97eeb511-d38c-5ffa-b2f9-00748f51eddb", 00:06:41.388 "assigned_rate_limits": { 00:06:41.388 "rw_ios_per_sec": 0, 00:06:41.388 "rw_mbytes_per_sec": 0, 00:06:41.388 "r_mbytes_per_sec": 0, 00:06:41.388 "w_mbytes_per_sec": 0 00:06:41.388 }, 00:06:41.388 "claimed": false, 00:06:41.388 "zoned": false, 00:06:41.388 "supported_io_types": { 00:06:41.388 "read": true, 00:06:41.388 "write": true, 00:06:41.388 "unmap": true, 00:06:41.388 "write_zeroes": true, 00:06:41.388 "flush": true, 00:06:41.388 "reset": true, 00:06:41.388 "compare": false, 00:06:41.388 "compare_and_write": false, 00:06:41.388 "abort": true, 00:06:41.388 "nvme_admin": false, 00:06:41.388 "nvme_io": false 00:06:41.388 }, 00:06:41.388 "memory_domains": [ 00:06:41.388 { 00:06:41.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.388 "dma_device_type": 2 00:06:41.388 } 00:06:41.388 ], 00:06:41.388 "driver_specific": { 00:06:41.388 "passthru": { 00:06:41.388 "name": "Passthru0", 00:06:41.388 "base_bdev_name": "Malloc2" 00:06:41.388 } 
00:06:41.388 } 00:06:41.388 } 00:06:41.388 ]' 00:06:41.388 13:48:32 -- rpc/rpc.sh@21 -- # jq length 00:06:41.388 13:48:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.388 13:48:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.388 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.388 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.388 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.388 13:48:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:41.388 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.388 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.388 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.388 13:48:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.388 13:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.388 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.388 13:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.388 13:48:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.388 13:48:32 -- rpc/rpc.sh@26 -- # jq length 00:06:41.647 13:48:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.648 00:06:41.648 real 0m0.285s 00:06:41.648 user 0m0.184s 00:06:41.648 sys 0m0.043s 00:06:41.648 13:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.648 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.648 ************************************ 00:06:41.648 END TEST rpc_daemon_integrity 00:06:41.648 ************************************ 00:06:41.648 13:48:32 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:41.648 13:48:32 -- rpc/rpc.sh@84 -- # killprocess 3889716 00:06:41.648 13:48:32 -- common/autotest_common.sh@926 -- # '[' -z 3889716 ']' 00:06:41.648 13:48:32 -- common/autotest_common.sh@930 -- # kill -0 3889716 00:06:41.648 13:48:32 -- common/autotest_common.sh@931 -- # uname 00:06:41.648 13:48:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.648 13:48:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3889716 00:06:41.648 13:48:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.648 13:48:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.648 13:48:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3889716' 00:06:41.648 killing process with pid 3889716 00:06:41.648 13:48:32 -- common/autotest_common.sh@945 -- # kill 3889716 00:06:41.648 13:48:32 -- common/autotest_common.sh@950 -- # wait 3889716 00:06:41.906 00:06:41.906 real 0m2.645s 00:06:41.906 user 0m3.399s 00:06:41.906 sys 0m0.789s 00:06:41.906 13:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.906 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.906 ************************************ 00:06:41.906 END TEST rpc 00:06:41.906 ************************************ 00:06:41.906 13:48:32 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:41.906 13:48:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.906 13:48:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.906 13:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:41.906 ************************************ 00:06:41.906 START TEST rpc_client 00:06:41.906 ************************************ 00:06:41.906 13:48:32 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:42.166 * Looking for test storage... 00:06:42.166 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:06:42.166 13:48:33 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:42.166 OK 00:06:42.166 13:48:33 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:42.166 00:06:42.166 real 0m0.131s 00:06:42.166 user 0m0.054s 00:06:42.166 sys 0m0.088s 00:06:42.166 13:48:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.166 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.166 ************************************ 00:06:42.166 END TEST rpc_client 00:06:42.166 ************************************ 00:06:42.166 13:48:33 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:42.166 13:48:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.166 13:48:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.166 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.167 ************************************ 00:06:42.167 START TEST json_config 00:06:42.167 ************************************ 00:06:42.167 13:48:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:06:42.167 13:48:33 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.167 13:48:33 -- nvmf/common.sh@7 -- # uname -s 00:06:42.167 13:48:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.167 13:48:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.167 13:48:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.167 13:48:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.167 13:48:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.167 13:48:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.167 13:48:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.167 13:48:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.167 13:48:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.167 13:48:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.167 13:48:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:42.167 13:48:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:42.167 13:48:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.167 13:48:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.167 13:48:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.167 13:48:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:42.167 13:48:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.167 13:48:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.167 13:48:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.167 13:48:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.167 13:48:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.167 13:48:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.167 13:48:33 -- paths/export.sh@5 -- # export PATH 00:06:42.167 13:48:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.167 13:48:33 -- nvmf/common.sh@46 -- # : 0 00:06:42.167 13:48:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:42.167 13:48:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:42.167 13:48:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:42.167 13:48:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.167 13:48:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.167 13:48:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:42.167 13:48:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:42.167 13:48:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:42.427 13:48:33 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:42.427 13:48:33 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:42.427 13:48:33 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:42.427 13:48:33 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:42.427 13:48:33 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:42.427 WARNING: No tests are enabled so not running JSON configuration tests 00:06:42.427 13:48:33 -- json_config/json_config.sh@27 -- # exit 0 00:06:42.427 00:06:42.427 real 0m0.103s 00:06:42.427 user 0m0.049s 00:06:42.427 sys 0m0.055s 00:06:42.427 13:48:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.427 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.427 ************************************ 00:06:42.427 END TEST json_config 00:06:42.427 ************************************ 00:06:42.427 13:48:33 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:42.427 13:48:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.427 13:48:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.427 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.427 ************************************ 00:06:42.427 START TEST json_config_extra_key 00:06:42.427 ************************************ 00:06:42.427 13:48:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:42.427 13:48:33 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.427 13:48:33 -- nvmf/common.sh@7 -- # uname -s 00:06:42.427 13:48:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.427 13:48:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.427 13:48:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.427 13:48:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.427 13:48:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.427 13:48:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.427 13:48:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.427 13:48:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.427 13:48:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.427 13:48:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.427 13:48:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:06:42.427 13:48:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:06:42.427 13:48:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.427 13:48:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.427 13:48:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:42.427 13:48:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:06:42.427 13:48:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.427 13:48:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.428 13:48:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.428 13:48:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.428 13:48:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.428 13:48:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.428 13:48:33 -- paths/export.sh@5 -- # export PATH 00:06:42.428 13:48:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.428 13:48:33 -- nvmf/common.sh@46 -- # : 0 00:06:42.428 13:48:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:42.428 13:48:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:42.428 13:48:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:42.428 13:48:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.428 13:48:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.428 13:48:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:42.428 13:48:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:42.428 13:48:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:42.428 INFO: launching applications... 
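The json_config_extra_key harness seen here keeps its per-application state in parallel bash associative arrays keyed by app name ("target"): the PID, the RPC socket, the spdk_tgt parameters, and the JSON config each live in their own array. A minimal sketch of that bookkeeping pattern, with a placeholder config path standing in for the real one:

    #!/usr/bin/env bash
    # Per-app bookkeeping as declared by json_config_extra_key.sh above.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/path/to/extra_key.json')   # placeholder path

    app=target
    echo "launching $app: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"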
00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3890357 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:42.428 Waiting for target to run... 00:06:42.428 13:48:33 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3890357 /var/tmp/spdk_tgt.sock 00:06:42.428 13:48:33 -- common/autotest_common.sh@819 -- # '[' -z 3890357 ']' 00:06:42.428 13:48:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:42.428 13:48:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.428 13:48:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:42.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:42.428 13:48:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.428 13:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:42.428 [2024-07-23 13:48:33.359784] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:42.428 [2024-07-23 13:48:33.359862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890357 ] 00:06:42.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.745 [2024-07-23 13:48:33.726610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.005 [2024-07-23 13:48:33.811825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.005 [2024-07-23 13:48:33.811960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.573 13:48:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.573 13:48:34 -- common/autotest_common.sh@852 -- # return 0 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:43.573 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:43.573 INFO: shutting down applications... 
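The launch just traced starts spdk_tgt with -r to choose the RPC socket and --json to apply the extra-key config, then blocks in waitforlisten until the new PID is up and the socket answers. A minimal sketch of that readiness wait, assuming rpc.py's spdk_get_version as the probe (the real helper in autotest_common.sh is more thorough):

    # Launch the target and poll its RPC socket until it responds.
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json ./extra_key.json &
    pid=$!
    for ((i = 0; i < 100; i++)); do
        if rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version &> /dev/null; then
            break                      # target is up and serving RPCs
        fi
        sleep 0.1
    done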
00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3890357 ]] 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3890357 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3890357 00:06:43.573 13:48:34 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3890357 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:43.832 SPDK target shutdown done 00:06:43.832 13:48:34 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:43.832 Success 00:06:43.832 00:06:43.832 real 0m1.587s 00:06:43.832 user 0m1.443s 00:06:43.832 sys 0m0.496s 00:06:43.832 13:48:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.832 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:43.832 ************************************ 00:06:43.832 END TEST json_config_extra_key 00:06:43.832 ************************************ 00:06:44.091 13:48:34 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:44.092 13:48:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.092 13:48:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.092 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:44.092 ************************************ 00:06:44.092 START TEST alias_rpc 00:06:44.092 ************************************ 00:06:44.092 13:48:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:44.092 * Looking for test storage... 00:06:44.092 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:06:44.092 13:48:34 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:44.092 13:48:34 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3890587 00:06:44.092 13:48:34 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3890587 00:06:44.092 13:48:34 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:44.092 13:48:34 -- common/autotest_common.sh@819 -- # '[' -z 3890587 ']' 00:06:44.092 13:48:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.092 13:48:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.092 13:48:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
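The shutdown that closed json_config_extra_key above is the counterpart: send SIGINT, then poll kill -0 for up to thirty half-second intervals (the i/30 counters visible in the trace) until the target exits. Reconstructed as a standalone sketch:

    # Graceful-shutdown loop as run by json_config_test_shutdown_app.
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'    # process is gone
            break
        fi
        sleep 0.5
    done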
00:06:44.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.092 13:48:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.092 13:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:44.092 [2024-07-23 13:48:35.012097] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:44.092 [2024-07-23 13:48:35.012224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890587 ] 00:06:44.092 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.351 [2024-07-23 13:48:35.134281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.351 [2024-07-23 13:48:35.233994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.351 [2024-07-23 13:48:35.234146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.919 13:48:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.919 13:48:35 -- common/autotest_common.sh@852 -- # return 0 00:06:44.919 13:48:35 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:45.178 13:48:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3890587 00:06:45.178 13:48:36 -- common/autotest_common.sh@926 -- # '[' -z 3890587 ']' 00:06:45.178 13:48:36 -- common/autotest_common.sh@930 -- # kill -0 3890587 00:06:45.178 13:48:36 -- common/autotest_common.sh@931 -- # uname 00:06:45.178 13:48:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.178 13:48:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3890587 00:06:45.178 13:48:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.178 13:48:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.178 13:48:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3890587' 00:06:45.178 killing process with pid 3890587 00:06:45.178 13:48:36 -- common/autotest_common.sh@945 -- # kill 3890587 00:06:45.178 13:48:36 -- common/autotest_common.sh@950 -- # wait 3890587 00:06:45.746 00:06:45.746 real 0m1.613s 00:06:45.746 user 0m1.685s 00:06:45.746 sys 0m0.532s 00:06:45.746 13:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.747 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 ************************************ 00:06:45.747 END TEST alias_rpc 00:06:45.747 ************************************ 00:06:45.747 13:48:36 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:45.747 13:48:36 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:45.747 13:48:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.747 13:48:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.747 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 ************************************ 00:06:45.747 START TEST spdkcli_tcp 00:06:45.747 ************************************ 00:06:45.747 13:48:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:45.747 * Looking for test storage... 
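alias_rpc drives rpc.py load_config, which reads a JSON configuration from stdin when no filename is given; the -i flag seen above is plausibly --include-aliases, letting a saved config that still uses deprecated method names load cleanly. A hedged sketch with a placeholder, empty config (the real test feeds JSON exercising specific aliases):

    # Placeholder config piped over stdin; the -i interpretation is an assumption.
    echo '{"subsystems": []}' | rpc.py -s /var/tmp/spdk.sock load_config -i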
00:06:45.747 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:06:45.747 13:48:36 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:45.747 13:48:36 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:45.747 13:48:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:45.747 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3890824 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@27 -- # waitforlisten 3890824 00:06:45.747 13:48:36 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:45.747 13:48:36 -- common/autotest_common.sh@819 -- # '[' -z 3890824 ']' 00:06:45.747 13:48:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.747 13:48:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.747 13:48:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.747 13:48:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.747 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.747 [2024-07-23 13:48:36.679088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
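The killprocess helper that just tore down the alias_rpc target is deliberately defensive: it rejects an empty PID, confirms the process is still alive with kill -0, reads the command name back with ps before signaling (the reactor_0 check in the trace), and only then kills and waits. A condensed sketch of that pattern (the real helper in autotest_common.sh handles more cases, e.g. sudo wrappers):

    # Condensed killprocess sketch; not the verbatim helper.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0        # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid" && wait "$pid"
    }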
00:06:45.747 [2024-07-23 13:48:36.679168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890824 ] 00:06:45.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.006 [2024-07-23 13:48:36.794756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.006 [2024-07-23 13:48:36.900361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.006 [2024-07-23 13:48:36.900561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.006 [2024-07-23 13:48:36.900566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.944 13:48:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.944 13:48:37 -- common/autotest_common.sh@852 -- # return 0 00:06:46.944 13:48:37 -- spdkcli/tcp.sh@31 -- # socat_pid=3891003 00:06:46.944 13:48:37 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:46.944 13:48:37 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:46.944 [ 00:06:46.944 "spdk_get_version", 00:06:46.944 "rpc_get_methods", 00:06:46.944 "trace_get_info", 00:06:46.944 "trace_get_tpoint_group_mask", 00:06:46.944 "trace_disable_tpoint_group", 00:06:46.944 "trace_enable_tpoint_group", 00:06:46.944 "trace_clear_tpoint_mask", 00:06:46.944 "trace_set_tpoint_mask", 00:06:46.944 "vfu_tgt_set_base_path", 00:06:46.944 "framework_get_pci_devices", 00:06:46.944 "framework_get_config", 00:06:46.944 "framework_get_subsystems", 00:06:46.944 "iobuf_get_stats", 00:06:46.944 "iobuf_set_options", 00:06:46.944 "sock_set_default_impl", 00:06:46.944 "sock_impl_set_options", 00:06:46.944 "sock_impl_get_options", 00:06:46.944 "vmd_rescan", 00:06:46.944 "vmd_remove_device", 00:06:46.944 "vmd_enable", 00:06:46.944 "accel_get_stats", 00:06:46.944 "accel_set_options", 00:06:46.944 "accel_set_driver", 00:06:46.944 "accel_crypto_key_destroy", 00:06:46.944 "accel_crypto_keys_get", 00:06:46.944 "accel_crypto_key_create", 00:06:46.944 "accel_assign_opc", 00:06:46.944 "accel_get_module_info", 00:06:46.944 "accel_get_opc_assignments", 00:06:46.944 "notify_get_notifications", 00:06:46.944 "notify_get_types", 00:06:46.944 "bdev_get_histogram", 00:06:46.944 "bdev_enable_histogram", 00:06:46.944 "bdev_set_qos_limit", 00:06:46.944 "bdev_set_qd_sampling_period", 00:06:46.944 "bdev_get_bdevs", 00:06:46.944 "bdev_reset_iostat", 00:06:46.944 "bdev_get_iostat", 00:06:46.944 "bdev_examine", 00:06:46.944 "bdev_wait_for_examine", 00:06:46.944 "bdev_set_options", 00:06:46.944 "scsi_get_devices", 00:06:46.944 "thread_set_cpumask", 00:06:46.944 "framework_get_scheduler", 00:06:46.944 "framework_set_scheduler", 00:06:46.944 "framework_get_reactors", 00:06:46.945 "thread_get_io_channels", 00:06:46.945 "thread_get_pollers", 00:06:46.945 "thread_get_stats", 00:06:46.945 "framework_monitor_context_switch", 00:06:46.945 "spdk_kill_instance", 00:06:46.945 "log_enable_timestamps", 00:06:46.945 "log_get_flags", 00:06:46.945 "log_clear_flag", 00:06:46.945 "log_set_flag", 00:06:46.945 "log_get_level", 00:06:46.945 "log_set_level", 00:06:46.945 "log_get_print_level", 00:06:46.945 "log_set_print_level", 00:06:46.945 "framework_enable_cpumask_locks", 00:06:46.945 "framework_disable_cpumask_locks", 00:06:46.945 "framework_wait_init", 00:06:46.945 
"framework_start_init", 00:06:46.945 "virtio_blk_create_transport", 00:06:46.945 "virtio_blk_get_transports", 00:06:46.945 "vhost_controller_set_coalescing", 00:06:46.945 "vhost_get_controllers", 00:06:46.945 "vhost_delete_controller", 00:06:46.945 "vhost_create_blk_controller", 00:06:46.945 "vhost_scsi_controller_remove_target", 00:06:46.945 "vhost_scsi_controller_add_target", 00:06:46.945 "vhost_start_scsi_controller", 00:06:46.945 "vhost_create_scsi_controller", 00:06:46.945 "ublk_recover_disk", 00:06:46.945 "ublk_get_disks", 00:06:46.945 "ublk_stop_disk", 00:06:46.945 "ublk_start_disk", 00:06:46.945 "ublk_destroy_target", 00:06:46.945 "ublk_create_target", 00:06:46.945 "nbd_get_disks", 00:06:46.945 "nbd_stop_disk", 00:06:46.945 "nbd_start_disk", 00:06:46.945 "env_dpdk_get_mem_stats", 00:06:46.945 "nvmf_subsystem_get_listeners", 00:06:46.945 "nvmf_subsystem_get_qpairs", 00:06:46.945 "nvmf_subsystem_get_controllers", 00:06:46.945 "nvmf_get_stats", 00:06:46.945 "nvmf_get_transports", 00:06:46.945 "nvmf_create_transport", 00:06:46.945 "nvmf_get_targets", 00:06:46.945 "nvmf_delete_target", 00:06:46.945 "nvmf_create_target", 00:06:46.945 "nvmf_subsystem_allow_any_host", 00:06:46.945 "nvmf_subsystem_remove_host", 00:06:46.945 "nvmf_subsystem_add_host", 00:06:46.945 "nvmf_subsystem_remove_ns", 00:06:46.945 "nvmf_subsystem_add_ns", 00:06:46.945 "nvmf_subsystem_listener_set_ana_state", 00:06:46.945 "nvmf_discovery_get_referrals", 00:06:46.945 "nvmf_discovery_remove_referral", 00:06:46.945 "nvmf_discovery_add_referral", 00:06:46.945 "nvmf_subsystem_remove_listener", 00:06:46.945 "nvmf_subsystem_add_listener", 00:06:46.945 "nvmf_delete_subsystem", 00:06:46.945 "nvmf_create_subsystem", 00:06:46.945 "nvmf_get_subsystems", 00:06:46.945 "nvmf_set_crdt", 00:06:46.945 "nvmf_set_config", 00:06:46.945 "nvmf_set_max_subsystems", 00:06:46.945 "iscsi_set_options", 00:06:46.945 "iscsi_get_auth_groups", 00:06:46.945 "iscsi_auth_group_remove_secret", 00:06:46.945 "iscsi_auth_group_add_secret", 00:06:46.945 "iscsi_delete_auth_group", 00:06:46.945 "iscsi_create_auth_group", 00:06:46.945 "iscsi_set_discovery_auth", 00:06:46.945 "iscsi_get_options", 00:06:46.945 "iscsi_target_node_request_logout", 00:06:46.945 "iscsi_target_node_set_redirect", 00:06:46.945 "iscsi_target_node_set_auth", 00:06:46.945 "iscsi_target_node_add_lun", 00:06:46.945 "iscsi_get_connections", 00:06:46.945 "iscsi_portal_group_set_auth", 00:06:46.945 "iscsi_start_portal_group", 00:06:46.945 "iscsi_delete_portal_group", 00:06:46.945 "iscsi_create_portal_group", 00:06:46.945 "iscsi_get_portal_groups", 00:06:46.945 "iscsi_delete_target_node", 00:06:46.945 "iscsi_target_node_remove_pg_ig_maps", 00:06:46.945 "iscsi_target_node_add_pg_ig_maps", 00:06:46.945 "iscsi_create_target_node", 00:06:46.945 "iscsi_get_target_nodes", 00:06:46.945 "iscsi_delete_initiator_group", 00:06:46.945 "iscsi_initiator_group_remove_initiators", 00:06:46.945 "iscsi_initiator_group_add_initiators", 00:06:46.945 "iscsi_create_initiator_group", 00:06:46.945 "iscsi_get_initiator_groups", 00:06:46.945 "vfu_virtio_create_scsi_endpoint", 00:06:46.945 "vfu_virtio_scsi_remove_target", 00:06:46.945 "vfu_virtio_scsi_add_target", 00:06:46.945 "vfu_virtio_create_blk_endpoint", 00:06:46.945 "vfu_virtio_delete_endpoint", 00:06:46.945 "iaa_scan_accel_module", 00:06:46.945 "dsa_scan_accel_module", 00:06:46.945 "ioat_scan_accel_module", 00:06:46.945 "accel_error_inject_error", 00:06:46.945 "bdev_iscsi_delete", 00:06:46.945 "bdev_iscsi_create", 00:06:46.945 "bdev_iscsi_set_options", 
00:06:46.945 "bdev_virtio_attach_controller", 00:06:46.945 "bdev_virtio_scsi_get_devices", 00:06:46.945 "bdev_virtio_detach_controller", 00:06:46.945 "bdev_virtio_blk_set_hotplug", 00:06:46.945 "bdev_ftl_set_property", 00:06:46.945 "bdev_ftl_get_properties", 00:06:46.945 "bdev_ftl_get_stats", 00:06:46.945 "bdev_ftl_unmap", 00:06:46.945 "bdev_ftl_unload", 00:06:46.945 "bdev_ftl_delete", 00:06:46.945 "bdev_ftl_load", 00:06:46.945 "bdev_ftl_create", 00:06:46.945 "bdev_aio_delete", 00:06:46.945 "bdev_aio_rescan", 00:06:46.945 "bdev_aio_create", 00:06:46.945 "blobfs_create", 00:06:46.945 "blobfs_detect", 00:06:46.945 "blobfs_set_cache_size", 00:06:46.945 "bdev_zone_block_delete", 00:06:46.945 "bdev_zone_block_create", 00:06:46.945 "bdev_delay_delete", 00:06:46.945 "bdev_delay_create", 00:06:46.945 "bdev_delay_update_latency", 00:06:46.945 "bdev_split_delete", 00:06:46.945 "bdev_split_create", 00:06:46.945 "bdev_error_inject_error", 00:06:46.945 "bdev_error_delete", 00:06:46.945 "bdev_error_create", 00:06:46.945 "bdev_raid_set_options", 00:06:46.945 "bdev_raid_remove_base_bdev", 00:06:46.945 "bdev_raid_add_base_bdev", 00:06:46.945 "bdev_raid_delete", 00:06:46.945 "bdev_raid_create", 00:06:46.945 "bdev_raid_get_bdevs", 00:06:46.945 "bdev_lvol_grow_lvstore", 00:06:46.945 "bdev_lvol_get_lvols", 00:06:46.945 "bdev_lvol_get_lvstores", 00:06:46.945 "bdev_lvol_delete", 00:06:46.945 "bdev_lvol_set_read_only", 00:06:46.945 "bdev_lvol_resize", 00:06:46.945 "bdev_lvol_decouple_parent", 00:06:46.945 "bdev_lvol_inflate", 00:06:46.945 "bdev_lvol_rename", 00:06:46.945 "bdev_lvol_clone_bdev", 00:06:46.945 "bdev_lvol_clone", 00:06:46.945 "bdev_lvol_snapshot", 00:06:46.945 "bdev_lvol_create", 00:06:46.945 "bdev_lvol_delete_lvstore", 00:06:46.945 "bdev_lvol_rename_lvstore", 00:06:46.945 "bdev_lvol_create_lvstore", 00:06:46.945 "bdev_passthru_delete", 00:06:46.945 "bdev_passthru_create", 00:06:46.945 "bdev_nvme_cuse_unregister", 00:06:46.945 "bdev_nvme_cuse_register", 00:06:46.945 "bdev_opal_new_user", 00:06:46.945 "bdev_opal_set_lock_state", 00:06:46.945 "bdev_opal_delete", 00:06:46.945 "bdev_opal_get_info", 00:06:46.945 "bdev_opal_create", 00:06:46.945 "bdev_nvme_opal_revert", 00:06:46.945 "bdev_nvme_opal_init", 00:06:46.945 "bdev_nvme_send_cmd", 00:06:46.945 "bdev_nvme_get_path_iostat", 00:06:46.945 "bdev_nvme_get_mdns_discovery_info", 00:06:46.945 "bdev_nvme_stop_mdns_discovery", 00:06:46.945 "bdev_nvme_start_mdns_discovery", 00:06:46.945 "bdev_nvme_set_multipath_policy", 00:06:46.945 "bdev_nvme_set_preferred_path", 00:06:46.945 "bdev_nvme_get_io_paths", 00:06:46.945 "bdev_nvme_remove_error_injection", 00:06:46.945 "bdev_nvme_add_error_injection", 00:06:46.945 "bdev_nvme_get_discovery_info", 00:06:46.945 "bdev_nvme_stop_discovery", 00:06:46.945 "bdev_nvme_start_discovery", 00:06:46.945 "bdev_nvme_get_controller_health_info", 00:06:46.945 "bdev_nvme_disable_controller", 00:06:46.945 "bdev_nvme_enable_controller", 00:06:46.945 "bdev_nvme_reset_controller", 00:06:46.945 "bdev_nvme_get_transport_statistics", 00:06:46.945 "bdev_nvme_apply_firmware", 00:06:46.945 "bdev_nvme_detach_controller", 00:06:46.945 "bdev_nvme_get_controllers", 00:06:46.945 "bdev_nvme_attach_controller", 00:06:46.945 "bdev_nvme_set_hotplug", 00:06:46.945 "bdev_nvme_set_options", 00:06:46.945 "bdev_null_resize", 00:06:46.945 "bdev_null_delete", 00:06:46.945 "bdev_null_create", 00:06:46.945 "bdev_malloc_delete", 00:06:46.945 "bdev_malloc_create" 00:06:46.945 ] 00:06:46.945 13:48:37 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
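spdkcli_tcp exercises the RPC server over TCP rather than the UNIX socket: the target runs on two reactors (-m 0x3), socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc_get_methods is issued through the bridge with retry and timeout options, producing the long method list above. The bridge and the client call, reconstructed, plus an optional jq filter in the style the tests already use:

    # Bridge TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # -r retries, -t timeout, -s server, -p port; list bdev methods only.
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods | jq -r '.[]' | grep '^bdev_'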
00:06:46.945 13:48:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:46.945 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.945 13:48:37 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:46.945 13:48:37 -- spdkcli/tcp.sh@38 -- # killprocess 3890824 00:06:46.945 13:48:37 -- common/autotest_common.sh@926 -- # '[' -z 3890824 ']' 00:06:46.945 13:48:37 -- common/autotest_common.sh@930 -- # kill -0 3890824 00:06:46.945 13:48:37 -- common/autotest_common.sh@931 -- # uname 00:06:46.945 13:48:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.945 13:48:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3890824 00:06:47.205 13:48:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:47.205 13:48:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:47.205 13:48:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3890824' 00:06:47.205 killing process with pid 3890824 00:06:47.205 13:48:37 -- common/autotest_common.sh@945 -- # kill 3890824 00:06:47.205 13:48:37 -- common/autotest_common.sh@950 -- # wait 3890824 00:06:47.465 00:06:47.465 real 0m1.808s 00:06:47.465 user 0m3.372s 00:06:47.465 sys 0m0.593s 00:06:47.465 13:48:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.465 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:47.465 ************************************ 00:06:47.465 END TEST spdkcli_tcp 00:06:47.465 ************************************ 00:06:47.465 13:48:38 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:47.465 13:48:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.465 13:48:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.465 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:47.465 ************************************ 00:06:47.465 START TEST dpdk_mem_utility 00:06:47.465 ************************************ 00:06:47.465 13:48:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:47.724 * Looking for test storage... 00:06:47.724 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:06:47.724 13:48:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:47.724 13:48:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3891217 00:06:47.724 13:48:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3891217 00:06:47.724 13:48:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:06:47.724 13:48:38 -- common/autotest_common.sh@819 -- # '[' -z 3891217 ']' 00:06:47.724 13:48:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.724 13:48:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.724 13:48:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.724 13:48:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.724 13:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:47.724 [2024-07-23 13:48:38.528209] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:47.724 [2024-07-23 13:48:38.528311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891217 ] 00:06:47.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.724 [2024-07-23 13:48:38.648778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.983 [2024-07-23 13:48:38.747278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.983 [2024-07-23 13:48:38.747412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.550 13:48:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.550 13:48:39 -- common/autotest_common.sh@852 -- # return 0 00:06:48.550 13:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:48.551 13:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:48.551 13:48:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:48.551 13:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:48.551 { 00:06:48.551 "filename": "/tmp/spdk_mem_dump.txt" 00:06:48.551 } 00:06:48.551 13:48:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:48.551 13:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:48.551 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:48.551 1 heaps totaling size 814.000000 MiB 00:06:48.551 size: 814.000000 MiB heap id: 0 00:06:48.551 end heaps---------- 00:06:48.551 8 mempools totaling size 598.116089 MiB 00:06:48.551 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:48.551 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:48.551 size: 84.521057 MiB name: bdev_io_3891217 00:06:48.551 size: 51.011292 MiB name: evtpool_3891217 00:06:48.551 size: 50.003479 MiB name: msgpool_3891217 00:06:48.551 size: 21.763794 MiB name: PDU_Pool 00:06:48.551 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:48.551 size: 0.026123 MiB name: Session_Pool 00:06:48.551 end mempools------- 00:06:48.551 6 memzones totaling size 4.142822 MiB 00:06:48.551 size: 1.000366 MiB name: RG_ring_0_3891217 00:06:48.551 size: 1.000366 MiB name: RG_ring_1_3891217 00:06:48.551 size: 1.000366 MiB name: RG_ring_4_3891217 00:06:48.551 size: 1.000366 MiB name: RG_ring_5_3891217 00:06:48.551 size: 0.125366 MiB name: RG_ring_2_3891217 00:06:48.551 size: 0.015991 MiB name: RG_ring_3_3891217 00:06:48.551 end memzones------- 00:06:48.551 13:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:48.810 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:48.811 list of free elements. 
size: 12.519348 MiB 00:06:48.811 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:48.811 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:48.811 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:48.811 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:48.811 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:48.811 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:48.811 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:48.811 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:48.811 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:48.811 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:48.811 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:48.811 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:48.811 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:48.811 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:48.811 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:48.811 list of standard malloc elements. size: 199.218079 MiB 00:06:48.811 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:48.811 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:48.811 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:48.811 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:48.811 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:48.811 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:48.811 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:48.811 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:48.811 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:48.811 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:48.811 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:48.811 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:48.811 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:48.811 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:48.811 list of memzone associated elements. size: 602.262573 MiB 00:06:48.811 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:48.811 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:48.811 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:48.811 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:48.811 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:48.811 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3891217_0 00:06:48.811 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:48.811 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3891217_0 00:06:48.811 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:48.811 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3891217_0 00:06:48.811 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:48.811 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:48.811 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:48.811 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:48.811 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:48.811 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3891217 00:06:48.811 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:48.811 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3891217 00:06:48.811 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:48.811 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3891217 00:06:48.811 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:48.811 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:48.811 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:48.811 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:48.811 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:48.811 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:48.811 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:48.811 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:48.811 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:48.811 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3891217 00:06:48.811 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:48.811 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3891217 00:06:48.811 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:48.811 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3891217 00:06:48.811 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:48.811 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3891217 00:06:48.811 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:48.811 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3891217 00:06:48.811 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:48.811 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:48.811 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:48.811 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:48.811 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:48.811 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:48.811 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:48.811 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3891217 00:06:48.811 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:48.811 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:48.811 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:48.811 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:48.811 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:48.811 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3891217 00:06:48.811 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:48.811 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:48.811 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:48.811 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3891217 00:06:48.811 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:48.811 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3891217 00:06:48.811 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:48.811 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:48.811 13:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:48.811 13:48:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3891217 00:06:48.811 13:48:39 -- common/autotest_common.sh@926 -- # '[' -z 3891217 ']' 00:06:48.811 13:48:39 -- common/autotest_common.sh@930 -- # kill -0 3891217 00:06:48.811 13:48:39 -- common/autotest_common.sh@931 -- # uname 00:06:48.811 13:48:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:48.811 13:48:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3891217 00:06:48.811 13:48:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:48.811 13:48:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:48.811 13:48:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3891217' 00:06:48.811 killing process with pid 3891217 00:06:48.811 13:48:39 -- common/autotest_common.sh@945 -- # kill 3891217 00:06:48.811 13:48:39 -- common/autotest_common.sh@950 -- # wait 3891217 00:06:49.071 00:06:49.071 real 0m1.650s 00:06:49.071 user 0m1.749s 00:06:49.071 sys 0m0.544s 00:06:49.071 13:48:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.072 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:49.072 ************************************ 00:06:49.072 END TEST dpdk_mem_utility 00:06:49.072 ************************************ 00:06:49.376 13:48:40 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:49.376 13:48:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.376 13:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.376 13:48:40 -- common/autotest_common.sh@10 -- # set +x 
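The dpdk_mem_utility run above is a two-step workflow: the env_dpdk_get_mem_stats RPC makes the running target write a dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then parses it — first as a summary (one 814 MiB heap, eight mempools totaling ~598 MiB, six memzones, all tagged with the target's PID), then heap by heap with -m 0. Reconstructed:

    # Step 1: ask the running target to dump DPDK memory state.
    rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # -> {"filename": "/tmp/spdk_mem_dump.txt"}

    # Step 2: summarize the dump, then walk heap 0 element by element.
    scripts/dpdk_mem_info.py
    scripts/dpdk_mem_info.py -m 0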
00:06:49.376 ************************************ 00:06:49.376 START TEST event 00:06:49.376 ************************************ 00:06:49.376 13:48:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:06:49.376 * Looking for test storage... 00:06:49.376 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:06:49.376 13:48:40 -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:49.376 13:48:40 -- bdev/nbd_common.sh@6 -- # set -e 00:06:49.376 13:48:40 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:49.376 13:48:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:49.376 13:48:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.376 13:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:49.376 ************************************ 00:06:49.376 START TEST event_perf 00:06:49.376 ************************************ 00:06:49.376 13:48:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:49.376 Running I/O for 1 seconds...[2024-07-23 13:48:40.236601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:49.376 [2024-07-23 13:48:40.236723] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891477 ] 00:06:49.376 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.376 [2024-07-23 13:48:40.344888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.635 [2024-07-23 13:48:40.443525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.635 [2024-07-23 13:48:40.443629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.635 [2024-07-23 13:48:40.443730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.635 [2024-07-23 13:48:40.443730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.574 Running I/O for 1 seconds... 00:06:50.574 lcore 0: 155788 00:06:50.574 lcore 1: 155787 00:06:50.574 lcore 2: 155785 00:06:50.574 lcore 3: 155786 00:06:50.574 done. 
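The event_perf run above drove four reactors (-m 0xF) for one second and printed one counter per lcore. A minimal way to re-run it and total those counters, assuming the binary is built at this workspace path and hugepages are already configured as on this CI node (the awk fields follow the "lcore N: COUNT" lines above):

  perf=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf
  "$perf" -m 0xF -t 1 |
      awk '$1 == "lcore" { sum += $3 } END { print "total events:", sum }'
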
00:06:50.574 00:06:50.574 real 0m1.308s 00:06:50.574 user 0m4.170s 00:06:50.574 sys 0m0.130s 00:06:50.574 13:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.574 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:50.574 ************************************ 00:06:50.574 END TEST event_perf 00:06:50.574 ************************************ 00:06:50.574 13:48:41 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:50.574 13:48:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:50.574 13:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.574 13:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:50.574 ************************************ 00:06:50.574 START TEST event_reactor 00:06:50.574 ************************************ 00:06:50.574 13:48:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:50.574 [2024-07-23 13:48:41.593205] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:50.574 [2024-07-23 13:48:41.593310] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891680 ] 00:06:50.834 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.834 [2024-07-23 13:48:41.714495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.834 [2024-07-23 13:48:41.811824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.213 test_start 00:06:52.213 oneshot 00:06:52.213 tick 100 00:06:52.213 tick 100 00:06:52.213 tick 250 00:06:52.213 tick 100 00:06:52.213 tick 100 00:06:52.213 tick 100 00:06:52.213 tick 250 00:06:52.213 tick 500 00:06:52.213 tick 100 00:06:52.213 tick 100 00:06:52.213 tick 250 00:06:52.213 tick 100 00:06:52.213 tick 100 00:06:52.213 test_end 00:06:52.213 00:06:52.213 real 0m1.317s 00:06:52.213 user 0m1.183s 00:06:52.213 sys 0m0.127s 00:06:52.213 13:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.213 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:52.213 ************************************ 00:06:52.213 END TEST event_reactor 00:06:52.213 ************************************ 00:06:52.213 13:48:42 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:52.213 13:48:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:52.213 13:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.213 13:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:52.213 ************************************ 00:06:52.213 START TEST event_reactor_perf 00:06:52.213 ************************************ 00:06:52.213 13:48:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:52.213 [2024-07-23 13:48:42.955802] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:52.213 [2024-07-23 13:48:42.955903] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891882 ] 00:06:52.213 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.213 [2024-07-23 13:48:43.076319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.213 [2024-07-23 13:48:43.173158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.591 test_start 00:06:53.591 test_end 00:06:53.591 Performance: 564585 events per second 00:06:53.591 00:06:53.591 real 0m1.321s 00:06:53.591 user 0m1.182s 00:06:53.591 sys 0m0.132s 00:06:53.591 13:48:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.591 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.591 ************************************ 00:06:53.591 END TEST event_reactor_perf 00:06:53.591 ************************************ 00:06:53.591 13:48:44 -- event/event.sh@49 -- # uname -s 00:06:53.591 13:48:44 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:53.591 13:48:44 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:53.591 13:48:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.591 13:48:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.591 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.591 ************************************ 00:06:53.591 START TEST event_scheduler 00:06:53.591 ************************************ 00:06:53.591 13:48:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:53.591 * Looking for test storage... 00:06:53.591 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:06:53.591 13:48:44 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:53.591 13:48:44 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3892101 00:06:53.591 13:48:44 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.591 13:48:44 -- scheduler/scheduler.sh@37 -- # waitforlisten 3892101 00:06:53.591 13:48:44 -- common/autotest_common.sh@819 -- # '[' -z 3892101 ']' 00:06:53.591 13:48:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.591 13:48:44 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:53.591 13:48:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.591 13:48:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.591 13:48:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.591 13:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:53.591 [2024-07-23 13:48:44.442516] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:53.591 [2024-07-23 13:48:44.442617] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892101 ] 00:06:53.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.591 [2024-07-23 13:48:44.563139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.850 [2024-07-23 13:48:44.667625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.850 [2024-07-23 13:48:44.667647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.850 [2024-07-23 13:48:44.667745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.850 [2024-07-23 13:48:44.667744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.418 13:48:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.418 13:48:45 -- common/autotest_common.sh@852 -- # return 0 00:06:54.418 13:48:45 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:54.418 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.418 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.418 POWER: Env isn't set yet! 00:06:54.418 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:54.418 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.418 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.418 POWER: Attempting to initialise PSTAT power management... 00:06:54.418 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:54.418 POWER: Initialized successfully for lcore 0 power management 00:06:54.418 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:54.418 POWER: Initialized successfully for lcore 1 power management 00:06:54.418 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:54.418 POWER: Initialized successfully for lcore 2 power management 00:06:54.677 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:54.677 POWER: Initialized successfully for lcore 3 power management 00:06:54.677 [2024-07-23 13:48:45.442439] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:54.677 [2024-07-23 13:48:45.442456] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:54.677 [2024-07-23 13:48:45.442468] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:54.677 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.677 13:48:45 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:54.677 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 [2024-07-23 13:48:45.522204] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
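The startup above is the standard --wait-for-rpc handshake: the scheduler app pauses before subsystem init so the test can select the dynamic scheduler first (its defaults, load limit 20 / core limit 80 / core busy 95, appear in the set_opts NOTICEs), and only then releases initialization. The same two RPCs issued directly, using the default socket this test relies on:

  rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  "$rpc" -s /var/tmp/spdk.sock framework_start_init
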
00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:54.678 13:48:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:54.678 13:48:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 ************************************ 00:06:54.678 START TEST scheduler_create_thread 00:06:54.678 ************************************ 00:06:54.678 13:48:45 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 2 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 3 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 4 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 5 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 6 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 7 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 8 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 9 00:06:54.678 
13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 10 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:54.678 13:48:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:54.678 13:48:45 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:54.678 13:48:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.678 13:48:45 -- common/autotest_common.sh@10 -- # set +x 00:06:55.616 13:48:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.616 13:48:46 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:55.616 13:48:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.616 13:48:46 -- common/autotest_common.sh@10 -- # set +x 00:06:56.997 13:48:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.997 13:48:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:56.997 13:48:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:56.997 13:48:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.997 13:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:57.934 13:48:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.934 00:06:57.934 real 0m3.382s 00:06:57.934 user 0m0.028s 00:06:57.934 sys 0m0.003s 00:06:57.934 13:48:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.934 13:48:48 -- common/autotest_common.sh@10 -- # set +x 00:06:57.934 ************************************ 00:06:57.934 END TEST scheduler_create_thread 00:06:57.934 ************************************ 00:06:58.193 13:48:48 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:58.193 13:48:48 -- scheduler/scheduler.sh@46 -- # killprocess 3892101 00:06:58.193 13:48:48 -- common/autotest_common.sh@926 -- # '[' -z 3892101 ']' 00:06:58.193 13:48:48 -- common/autotest_common.sh@930 -- # kill -0 3892101 00:06:58.193 13:48:48 -- common/autotest_common.sh@931 -- # uname 00:06:58.193 13:48:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:58.193 13:48:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3892101 00:06:58.193 13:48:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:58.193 13:48:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:58.193 13:48:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3892101' 00:06:58.193 killing process with pid 3892101 00:06:58.193 13:48:49 -- common/autotest_common.sh@945 -- # kill 3892101 00:06:58.193 13:48:49 -- common/autotest_common.sh@950 -- # wait 3892101 00:06:58.452 [2024-07-23 13:48:49.294509] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
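scheduler_create_thread above reduces to a create/activate/delete cycle through the test's RPC plugin; a compact sketch using the same rpc_cmd wrapper and scheduler_plugin the trace shows:

  # start a thread with no load (-a 0 = 0% active); the RPC returns its id
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  # raise the simulated load to 50% so the dynamic scheduler must rebalance it
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # threads are removed by id once the test is done with them
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
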
00:06:58.452 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:58.452 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:58.452 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:58.452 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:58.452 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:58.452 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:58.452 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:58.452 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:58.711 00:06:58.711 real 0m5.228s 00:06:58.711 user 0m10.875s 00:06:58.711 sys 0m0.465s 00:06:58.711 13:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.711 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:58.711 ************************************ 00:06:58.711 END TEST event_scheduler 00:06:58.711 ************************************ 00:06:58.711 13:48:49 -- event/event.sh@51 -- # modprobe -n nbd 00:06:58.711 13:48:49 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:58.711 13:48:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.711 13:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.711 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:58.711 ************************************ 00:06:58.711 START TEST app_repeat 00:06:58.711 ************************************ 00:06:58.711 13:48:49 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:58.711 13:48:49 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.711 13:48:49 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.711 13:48:49 -- event/event.sh@13 -- # local nbd_list 00:06:58.711 13:48:49 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.711 13:48:49 -- event/event.sh@14 -- # local bdev_list 00:06:58.711 13:48:49 -- event/event.sh@15 -- # local repeat_times=4 00:06:58.711 13:48:49 -- event/event.sh@17 -- # modprobe nbd 00:06:58.711 13:48:49 -- event/event.sh@19 -- # repeat_pid=3892865 00:06:58.711 13:48:49 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.711 13:48:49 -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:58.711 13:48:49 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3892865' 00:06:58.711 Process app_repeat pid: 3892865 00:06:58.711 13:48:49 -- event/event.sh@23 -- # for i in {0..2} 00:06:58.711 13:48:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:58.711 spdk_app_start Round 0 00:06:58.711 13:48:49 -- event/event.sh@25 -- # waitforlisten 3892865 /var/tmp/spdk-nbd.sock 00:06:58.711 13:48:49 -- common/autotest_common.sh@819 -- # '[' -z 3892865 ']' 00:06:58.711 13:48:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.711 13:48:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.711 13:48:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:58.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.711 13:48:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.711 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:58.711 [2024-07-23 13:48:49.630569] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:58.711 [2024-07-23 13:48:49.630655] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892865 ] 00:06:58.711 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.970 [2024-07-23 13:48:49.738170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.970 [2024-07-23 13:48:49.837152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.970 [2024-07-23 13:48:49.837157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.907 13:48:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.907 13:48:50 -- common/autotest_common.sh@852 -- # return 0 00:06:59.907 13:48:50 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.907 Malloc0 00:06:59.907 13:48:50 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.166 Malloc1 00:07:00.166 13:48:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@12 -- # local i 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.166 /dev/nbd0 00:07:00.166 13:48:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.425 13:48:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.425 13:48:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:00.425 13:48:51 -- common/autotest_common.sh@857 -- # local i 00:07:00.425 13:48:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:00.425 13:48:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:00.425 13:48:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:00.425 13:48:51 -- 
common/autotest_common.sh@861 -- # break 00:07:00.425 13:48:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:00.425 13:48:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:00.425 13:48:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.425 1+0 records in 00:07:00.425 1+0 records out 00:07:00.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248911 s, 16.5 MB/s 00:07:00.425 13:48:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:00.425 13:48:51 -- common/autotest_common.sh@874 -- # size=4096 00:07:00.425 13:48:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:00.425 13:48:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:00.425 13:48:51 -- common/autotest_common.sh@877 -- # return 0 00:07:00.425 13:48:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.425 13:48:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.425 13:48:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:00.425 /dev/nbd1 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.684 13:48:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:00.684 13:48:51 -- common/autotest_common.sh@857 -- # local i 00:07:00.684 13:48:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:00.684 13:48:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:00.684 13:48:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:00.684 13:48:51 -- common/autotest_common.sh@861 -- # break 00:07:00.684 13:48:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:00.684 13:48:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:00.684 13:48:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.684 1+0 records in 00:07:00.684 1+0 records out 00:07:00.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274568 s, 14.9 MB/s 00:07:00.684 13:48:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:00.684 13:48:51 -- common/autotest_common.sh@874 -- # size=4096 00:07:00.684 13:48:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:00.684 13:48:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:00.684 13:48:51 -- common/autotest_common.sh@877 -- # return 0 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.684 13:48:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.944 { 00:07:00.944 "nbd_device": "/dev/nbd0", 00:07:00.944 "bdev_name": "Malloc0" 00:07:00.944 }, 00:07:00.944 { 00:07:00.944 "nbd_device": 
"/dev/nbd1", 00:07:00.944 "bdev_name": "Malloc1" 00:07:00.944 } 00:07:00.944 ]' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.944 { 00:07:00.944 "nbd_device": "/dev/nbd0", 00:07:00.944 "bdev_name": "Malloc0" 00:07:00.944 }, 00:07:00.944 { 00:07:00.944 "nbd_device": "/dev/nbd1", 00:07:00.944 "bdev_name": "Malloc1" 00:07:00.944 } 00:07:00.944 ]' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.944 /dev/nbd1' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.944 /dev/nbd1' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@65 -- # count=2 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@95 -- # count=2 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:00.944 256+0 records in 00:07:00.944 256+0 records out 00:07:00.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114065 s, 91.9 MB/s 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.944 256+0 records in 00:07:00.944 256+0 records out 00:07:00.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303538 s, 34.5 MB/s 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.944 256+0 records in 00:07:00.944 256+0 records out 00:07:00.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313462 s, 33.5 MB/s 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:00.944 13:48:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:00.945 13:48:51 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@51 -- # local i 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.945 13:48:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@41 -- # break 00:07:01.203 13:48:52 -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.204 13:48:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.204 13:48:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@41 -- # break 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.463 13:48:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@65 -- # true 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@65 -- # count=0 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@104 -- # count=0 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:01.722 13:48:52 -- bdev/nbd_common.sh@109 -- # return 0 00:07:01.722 13:48:52 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 
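Round 0 above is one pass of the nbd data-verification cycle: expose a malloc bdev as a kernel /dev/nbdX device, write random data through it, and compare the data back. Condensed to one device, with the scratch file moved to /tmp for illustration (the test keeps its copy under the spdk tree):

  rpc() { /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  tmp=/tmp/nbdrandtest
  rpc bdev_malloc_create 64 4096          # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
  rpc nbd_start_disk Malloc0 /dev/nbd0    # attach it as a kernel block device
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0           # read back through nbd and verify
  rpc nbd_stop_disk /dev/nbd0
  rm -f "$tmp"
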
00:07:01.980 13:48:52 -- event/event.sh@35 -- # sleep 3 00:07:02.239 [2024-07-23 13:48:53.149654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.239 [2024-07-23 13:48:53.245382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.239 [2024-07-23 13:48:53.245387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.498 [2024-07-23 13:48:53.295994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:02.498 [2024-07-23 13:48:53.296053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.041 13:48:55 -- event/event.sh@23 -- # for i in {0..2} 00:07:05.041 13:48:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:05.041 spdk_app_start Round 1 00:07:05.041 13:48:55 -- event/event.sh@25 -- # waitforlisten 3892865 /var/tmp/spdk-nbd.sock 00:07:05.041 13:48:55 -- common/autotest_common.sh@819 -- # '[' -z 3892865 ']' 00:07:05.041 13:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.041 13:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:05.041 13:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.041 13:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:05.041 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:07:05.372 13:48:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.372 13:48:56 -- common/autotest_common.sh@852 -- # return 0 00:07:05.372 13:48:56 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.372 Malloc0 00:07:05.372 13:48:56 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:05.632 Malloc1 00:07:05.632 13:48:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@12 -- # local i 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.632 13:48:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:05.896 
/dev/nbd0 00:07:05.896 13:48:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:05.896 13:48:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:05.896 13:48:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:05.896 13:48:56 -- common/autotest_common.sh@857 -- # local i 00:07:05.896 13:48:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:05.896 13:48:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:05.896 13:48:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:05.897 13:48:56 -- common/autotest_common.sh@861 -- # break 00:07:05.897 13:48:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:05.897 13:48:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:05.897 13:48:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.897 1+0 records in 00:07:05.897 1+0 records out 00:07:05.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278316 s, 14.7 MB/s 00:07:05.897 13:48:56 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:05.897 13:48:56 -- common/autotest_common.sh@874 -- # size=4096 00:07:05.897 13:48:56 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:05.897 13:48:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:05.897 13:48:56 -- common/autotest_common.sh@877 -- # return 0 00:07:05.897 13:48:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.897 13:48:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.897 13:48:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.156 /dev/nbd1 00:07:06.156 13:48:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.156 13:48:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.156 13:48:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:06.156 13:48:57 -- common/autotest_common.sh@857 -- # local i 00:07:06.156 13:48:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:06.156 13:48:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:06.156 13:48:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:06.156 13:48:57 -- common/autotest_common.sh@861 -- # break 00:07:06.156 13:48:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:06.157 13:48:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:06.157 13:48:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.157 1+0 records in 00:07:06.157 1+0 records out 00:07:06.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258738 s, 15.8 MB/s 00:07:06.157 13:48:57 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:06.157 13:48:57 -- common/autotest_common.sh@874 -- # size=4096 00:07:06.157 13:48:57 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:06.157 13:48:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:06.157 13:48:57 -- common/autotest_common.sh@877 -- # return 0 00:07:06.157 13:48:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.157 13:48:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:07:06.157 13:48:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.157 13:48:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.157 13:48:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.417 { 00:07:06.417 "nbd_device": "/dev/nbd0", 00:07:06.417 "bdev_name": "Malloc0" 00:07:06.417 }, 00:07:06.417 { 00:07:06.417 "nbd_device": "/dev/nbd1", 00:07:06.417 "bdev_name": "Malloc1" 00:07:06.417 } 00:07:06.417 ]' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.417 { 00:07:06.417 "nbd_device": "/dev/nbd0", 00:07:06.417 "bdev_name": "Malloc0" 00:07:06.417 }, 00:07:06.417 { 00:07:06.417 "nbd_device": "/dev/nbd1", 00:07:06.417 "bdev_name": "Malloc1" 00:07:06.417 } 00:07:06.417 ]' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.417 /dev/nbd1' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.417 /dev/nbd1' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@65 -- # count=2 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@95 -- # count=2 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.417 13:48:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:06.677 256+0 records in 00:07:06.677 256+0 records out 00:07:06.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114465 s, 91.6 MB/s 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.677 256+0 records in 00:07:06.677 256+0 records out 00:07:06.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300249 s, 34.9 MB/s 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.677 256+0 records in 00:07:06.677 256+0 records out 00:07:06.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319872 s, 32.8 MB/s 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@51 -- # local i 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.677 13:48:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@41 -- # break 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@41 -- # break 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.937 13:48:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.197 13:48:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.197 13:48:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.197 13:48:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.456 13:48:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.456 13:48:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.456 13:48:58 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.456 13:48:58 -- bdev/nbd_common.sh@65 -- # true 00:07:07.457 13:48:58 -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.457 13:48:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.457 13:48:58 -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.457 13:48:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.457 13:48:58 -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.457 13:48:58 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:07.717 13:48:58 -- event/event.sh@35 -- # sleep 3 00:07:07.717 [2024-07-23 13:48:58.737766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.977 [2024-07-23 13:48:58.837722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.977 [2024-07-23 13:48:58.837726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.977 [2024-07-23 13:48:58.888148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.977 [2024-07-23 13:48:58.888205] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.518 13:49:01 -- event/event.sh@23 -- # for i in {0..2} 00:07:10.518 13:49:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.518 spdk_app_start Round 2 00:07:10.518 13:49:01 -- event/event.sh@25 -- # waitforlisten 3892865 /var/tmp/spdk-nbd.sock 00:07:10.518 13:49:01 -- common/autotest_common.sh@819 -- # '[' -z 3892865 ']' 00:07:10.518 13:49:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.518 13:49:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.518 13:49:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:10.518 13:49:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.518 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:07:10.777 13:49:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:10.777 13:49:01 -- common/autotest_common.sh@852 -- # return 0 00:07:10.777 13:49:01 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.037 Malloc0 00:07:11.037 13:49:01 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.296 Malloc1 00:07:11.296 13:49:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@12 -- # local i 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.296 13:49:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.556 /dev/nbd0 00:07:11.556 13:49:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.556 13:49:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.556 13:49:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:11.556 13:49:02 -- common/autotest_common.sh@857 -- # local i 00:07:11.556 13:49:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:11.556 13:49:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:11.556 13:49:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:11.556 13:49:02 -- common/autotest_common.sh@861 -- # break 00:07:11.556 13:49:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:11.556 13:49:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:11.556 13:49:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.556 1+0 records in 00:07:11.556 1+0 records out 00:07:11.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251689 s, 16.3 MB/s 00:07:11.556 13:49:02 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:11.556 13:49:02 -- common/autotest_common.sh@874 -- # size=4096 00:07:11.556 13:49:02 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:11.556 13:49:02 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:11.556 13:49:02 -- common/autotest_common.sh@877 -- # return 0 00:07:11.556 13:49:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.556 13:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.556 13:49:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.816 /dev/nbd1 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.816 13:49:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:11.816 13:49:02 -- common/autotest_common.sh@857 -- # local i 00:07:11.816 13:49:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:11.816 13:49:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:11.816 13:49:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:11.816 13:49:02 -- common/autotest_common.sh@861 -- # break 00:07:11.816 13:49:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:11.816 13:49:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:11.816 13:49:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.816 1+0 records in 00:07:11.816 1+0 records out 00:07:11.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283585 s, 14.4 MB/s 00:07:11.816 13:49:02 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:11.816 13:49:02 -- common/autotest_common.sh@874 -- # size=4096 00:07:11.816 13:49:02 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:07:11.816 13:49:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:11.816 13:49:02 -- common/autotest_common.sh@877 -- # return 0 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.816 13:49:02 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.076 { 00:07:12.076 "nbd_device": "/dev/nbd0", 00:07:12.076 "bdev_name": "Malloc0" 00:07:12.076 }, 00:07:12.076 { 00:07:12.076 "nbd_device": "/dev/nbd1", 00:07:12.076 "bdev_name": "Malloc1" 00:07:12.076 } 00:07:12.076 ]' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.076 { 00:07:12.076 "nbd_device": "/dev/nbd0", 00:07:12.076 "bdev_name": "Malloc0" 00:07:12.076 }, 00:07:12.076 { 00:07:12.076 "nbd_device": "/dev/nbd1", 00:07:12.076 "bdev_name": "Malloc1" 00:07:12.076 } 00:07:12.076 ]' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.076 /dev/nbd1' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.076 /dev/nbd1' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.076 13:49:02 -- 
bdev/nbd_common.sh@95 -- # count=2 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.076 256+0 records in 00:07:12.076 256+0 records out 00:07:12.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01051 s, 99.8 MB/s 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.076 256+0 records in 00:07:12.076 256+0 records out 00:07:12.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245069 s, 42.8 MB/s 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.076 13:49:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.076 256+0 records in 00:07:12.076 256+0 records out 00:07:12.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219995 s, 47.7 MB/s 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@51 -- # local i 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.077 13:49:03 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@41 -- # break 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.336 13:49:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@41 -- # break 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.595 13:49:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@65 -- # true 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.855 13:49:03 -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.855 13:49:03 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.114 13:49:04 -- event/event.sh@35 -- # sleep 3 00:07:13.372 [2024-07-23 13:49:04.343018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.631 [2024-07-23 13:49:04.435883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.631 [2024-07-23 13:49:04.435888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.631 [2024-07-23 13:49:04.485782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.631 [2024-07-23 13:49:04.485839] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
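The trace above is the nbd write/verify round trip: 1 MiB of random data is generated, written through each exported /dev/nbdX device with O_DIRECT, compared back byte-for-byte, and then the devices are detached over RPC. A minimal standalone sketch of the same flow, assuming an SPDK target is already listening on /var/tmp/spdk-nbd.sock and a Malloc0 bdev exists (RPC names and paths mirror the log; the temp-file location is illustrative):

    # Sketch of the nbd data-verify loop seen in the trace above.
    RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock
    $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0               # export the bdev as an nbd device
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256      # 1 MiB of random data
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write it to the device
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                       # read back and verify
    $RPC -s $SOCK nbd_stop_disk /dev/nbd0                         # detach the device
    rm -f /tmp/nbdrandtest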
00:07:16.172 13:49:07 -- event/event.sh@38 -- # waitforlisten 3892865 /var/tmp/spdk-nbd.sock 00:07:16.172 13:49:07 -- common/autotest_common.sh@819 -- # '[' -z 3892865 ']' 00:07:16.172 13:49:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.172 13:49:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.172 13:49:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.172 13:49:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.172 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.432 13:49:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:16.432 13:49:07 -- common/autotest_common.sh@852 -- # return 0 00:07:16.432 13:49:07 -- event/event.sh@39 -- # killprocess 3892865 00:07:16.432 13:49:07 -- common/autotest_common.sh@926 -- # '[' -z 3892865 ']' 00:07:16.432 13:49:07 -- common/autotest_common.sh@930 -- # kill -0 3892865 00:07:16.432 13:49:07 -- common/autotest_common.sh@931 -- # uname 00:07:16.432 13:49:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:16.432 13:49:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3892865 00:07:16.432 13:49:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:16.432 13:49:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:16.432 13:49:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3892865' 00:07:16.432 killing process with pid 3892865 00:07:16.432 13:49:07 -- common/autotest_common.sh@945 -- # kill 3892865 00:07:16.432 13:49:07 -- common/autotest_common.sh@950 -- # wait 3892865 00:07:16.692 spdk_app_start is called in Round 0. 00:07:16.692 Shutdown signal received, stop current app iteration 00:07:16.692 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:07:16.692 spdk_app_start is called in Round 1. 00:07:16.692 Shutdown signal received, stop current app iteration 00:07:16.692 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:07:16.692 spdk_app_start is called in Round 2. 00:07:16.692 Shutdown signal received, stop current app iteration 00:07:16.692 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:07:16.692 spdk_app_start is called in Round 3. 
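The waitforlisten/killprocess pair driving the trace above is the standard lifecycle pattern in these tests: poll until the target process is alive and its UNIX RPC socket answers, run the test body, then terminate and reap. A condensed sketch of that pattern (the *_sketch names are illustrative; the real helpers in autotest_common.sh also retry RPC calls and verify the process name with ps, as visible in the trace):

    # Simplified wait/kill lifecycle, condensed from the xtrace above.
    waitforlisten_sketch() {
        local pid=$1 sock=$2
        while kill -0 "$pid" 2>/dev/null; do
            [ -S "$sock" ] && break    # stop once the RPC socket exists (real helper issues an RPC)
            sleep 0.1
        done
    }
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid"                 # assert the process is still alive
        kill "$pid" && wait "$pid"     # terminate and reap it
    }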
00:07:16.692 Shutdown signal received, stop current app iteration 00:07:16.692 13:49:07 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:16.692 13:49:07 -- event/event.sh@42 -- # return 0 00:07:16.692 00:07:16.692 real 0m17.988s 00:07:16.692 user 0m38.400s 00:07:16.692 sys 0m3.909s 00:07:16.692 13:49:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.692 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.692 ************************************ 00:07:16.692 END TEST app_repeat 00:07:16.692 ************************************ 00:07:16.692 13:49:07 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:16.692 13:49:07 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:16.692 13:49:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.692 13:49:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.692 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.692 ************************************ 00:07:16.692 START TEST cpu_locks 00:07:16.692 ************************************ 00:07:16.692 13:49:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:16.952 * Looking for test storage... 00:07:16.952 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:07:16.952 13:49:07 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:16.952 13:49:07 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:16.952 13:49:07 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:16.952 13:49:07 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:16.952 13:49:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.952 13:49:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.952 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.952 ************************************ 00:07:16.953 START TEST default_locks 00:07:16.953 ************************************ 00:07:16.953 13:49:07 -- common/autotest_common.sh@1104 -- # default_locks 00:07:16.953 13:49:07 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3895428 00:07:16.953 13:49:07 -- event/cpu_locks.sh@47 -- # waitforlisten 3895428 00:07:16.953 13:49:07 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.953 13:49:07 -- common/autotest_common.sh@819 -- # '[' -z 3895428 ']' 00:07:16.953 13:49:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.953 13:49:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.953 13:49:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.953 13:49:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.953 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.953 [2024-07-23 13:49:07.781141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:16.953 [2024-07-23 13:49:07.781226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895428 ] 00:07:16.953 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.953 [2024-07-23 13:49:07.894121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.213 [2024-07-23 13:49:07.992371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.213 [2024-07-23 13:49:07.992514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.782 13:49:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:17.783 13:49:08 -- common/autotest_common.sh@852 -- # return 0 00:07:17.783 13:49:08 -- event/cpu_locks.sh@49 -- # locks_exist 3895428 00:07:17.783 13:49:08 -- event/cpu_locks.sh@22 -- # lslocks -p 3895428 00:07:17.783 13:49:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.352 lslocks: write error 00:07:18.352 13:49:09 -- event/cpu_locks.sh@50 -- # killprocess 3895428 00:07:18.352 13:49:09 -- common/autotest_common.sh@926 -- # '[' -z 3895428 ']' 00:07:18.352 13:49:09 -- common/autotest_common.sh@930 -- # kill -0 3895428 00:07:18.352 13:49:09 -- common/autotest_common.sh@931 -- # uname 00:07:18.352 13:49:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:18.352 13:49:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3895428 00:07:18.612 13:49:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:18.612 13:49:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:18.612 13:49:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3895428' 00:07:18.612 killing process with pid 3895428 00:07:18.612 13:49:09 -- common/autotest_common.sh@945 -- # kill 3895428 00:07:18.612 13:49:09 -- common/autotest_common.sh@950 -- # wait 3895428 00:07:18.873 13:49:09 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3895428 00:07:18.873 13:49:09 -- common/autotest_common.sh@640 -- # local es=0 00:07:18.873 13:49:09 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3895428 00:07:18.873 13:49:09 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:18.873 13:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.873 13:49:09 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:18.873 13:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:18.873 13:49:09 -- common/autotest_common.sh@643 -- # waitforlisten 3895428 00:07:18.873 13:49:09 -- common/autotest_common.sh@819 -- # '[' -z 3895428 ']' 00:07:18.873 13:49:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.873 13:49:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.873 13:49:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
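The default_locks body above verifies that a target started with -m 0x1 holds a POSIX file lock named spdk_cpu_lock for its claimed core; the check is simply lslocks piped into grep. The stray "lslocks: write error" in the trace is almost certainly lslocks hitting a closed pipe after grep -q exits on the first match, not a test failure. A sketch of the check, using the binary path from the log:

    # Sketch of the core-lock existence check (locks_exist in the trace).
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    # ... wait for the RPC socket as sketched earlier ...
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core lock held by $pid"
    fi
    kill "$pid" && wait "$pid"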
00:07:18.873 13:49:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.873 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3895428) - No such process 00:07:18.873 ERROR: process (pid: 3895428) is no longer running 00:07:18.873 13:49:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.873 13:49:09 -- common/autotest_common.sh@852 -- # return 1 00:07:18.873 13:49:09 -- common/autotest_common.sh@643 -- # es=1 00:07:18.873 13:49:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:18.873 13:49:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:18.873 13:49:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:18.873 13:49:09 -- event/cpu_locks.sh@54 -- # no_locks 00:07:18.873 13:49:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:18.873 13:49:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:18.873 13:49:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:18.873 00:07:18.873 real 0m2.003s 00:07:18.873 user 0m2.115s 00:07:18.873 sys 0m0.770s 00:07:18.873 13:49:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.873 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 ************************************ 00:07:18.873 END TEST default_locks 00:07:18.873 ************************************ 00:07:18.873 13:49:09 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:18.873 13:49:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.873 13:49:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.873 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 ************************************ 00:07:18.873 START TEST default_locks_via_rpc 00:07:18.873 ************************************ 00:07:18.873 13:49:09 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:07:18.873 13:49:09 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3895810 00:07:18.873 13:49:09 -- event/cpu_locks.sh@63 -- # waitforlisten 3895810 00:07:18.873 13:49:09 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.873 13:49:09 -- common/autotest_common.sh@819 -- # '[' -z 3895810 ']' 00:07:18.873 13:49:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.873 13:49:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:18.873 13:49:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.873 13:49:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:18.873 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.873 [2024-07-23 13:49:09.837943] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:18.873 [2024-07-23 13:49:09.838020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895810 ] 00:07:18.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.133 [2024-07-23 13:49:09.961049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.133 [2024-07-23 13:49:10.066153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:19.133 [2024-07-23 13:49:10.066310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.071 13:49:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:20.071 13:49:10 -- common/autotest_common.sh@852 -- # return 0 00:07:20.071 13:49:10 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:20.071 13:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.071 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:07:20.071 13:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.071 13:49:10 -- event/cpu_locks.sh@67 -- # no_locks 00:07:20.071 13:49:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.071 13:49:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.071 13:49:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.071 13:49:10 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.071 13:49:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:20.071 13:49:10 -- common/autotest_common.sh@10 -- # set +x 00:07:20.071 13:49:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:20.071 13:49:10 -- event/cpu_locks.sh@71 -- # locks_exist 3895810 00:07:20.071 13:49:10 -- event/cpu_locks.sh@22 -- # lslocks -p 3895810 00:07:20.071 13:49:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.331 13:49:11 -- event/cpu_locks.sh@73 -- # killprocess 3895810 00:07:20.331 13:49:11 -- common/autotest_common.sh@926 -- # '[' -z 3895810 ']' 00:07:20.331 13:49:11 -- common/autotest_common.sh@930 -- # kill -0 3895810 00:07:20.331 13:49:11 -- common/autotest_common.sh@931 -- # uname 00:07:20.331 13:49:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.331 13:49:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3895810 00:07:20.591 13:49:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:20.591 13:49:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:20.591 13:49:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3895810' 00:07:20.591 killing process with pid 3895810 00:07:20.591 13:49:11 -- common/autotest_common.sh@945 -- # kill 3895810 00:07:20.591 13:49:11 -- common/autotest_common.sh@950 -- # wait 3895810 00:07:20.851 00:07:20.851 real 0m1.923s 00:07:20.851 user 0m2.029s 00:07:20.851 sys 0m0.701s 00:07:20.851 13:49:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.851 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 ************************************ 00:07:20.851 END TEST default_locks_via_rpc 00:07:20.851 ************************************ 00:07:20.851 13:49:11 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:20.851 13:49:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:20.851 13:49:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.851 13:49:11 -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.851 ************************************ 00:07:20.851 START TEST non_locking_app_on_locked_coremask 00:07:20.851 ************************************ 00:07:20.851 13:49:11 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:07:20.851 13:49:11 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3896027 00:07:20.851 13:49:11 -- event/cpu_locks.sh@81 -- # waitforlisten 3896027 /var/tmp/spdk.sock 00:07:20.851 13:49:11 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.851 13:49:11 -- common/autotest_common.sh@819 -- # '[' -z 3896027 ']' 00:07:20.851 13:49:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.851 13:49:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:20.851 13:49:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.851 13:49:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:20.851 13:49:11 -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 [2024-07-23 13:49:11.807105] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:20.851 [2024-07-23 13:49:11.807193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896027 ] 00:07:20.851 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.111 [2024-07-23 13:49:11.918175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.111 [2024-07-23 13:49:12.018137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:21.111 [2024-07-23 13:49:12.018292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.077 13:49:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:22.077 13:49:12 -- common/autotest_common.sh@852 -- # return 0 00:07:22.077 13:49:12 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3896203 00:07:22.077 13:49:12 -- event/cpu_locks.sh@85 -- # waitforlisten 3896203 /var/tmp/spdk2.sock 00:07:22.077 13:49:12 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:22.077 13:49:12 -- common/autotest_common.sh@819 -- # '[' -z 3896203 ']' 00:07:22.077 13:49:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.077 13:49:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:22.077 13:49:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.077 13:49:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:22.077 13:49:12 -- common/autotest_common.sh@10 -- # set +x 00:07:22.077 [2024-07-23 13:49:12.743088] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:22.077 [2024-07-23 13:49:12.743168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896203 ] 00:07:22.077 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.077 [2024-07-23 13:49:12.902752] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:22.077 [2024-07-23 13:49:12.902786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.077 [2024-07-23 13:49:13.095927] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.077 [2024-07-23 13:49:13.096069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.645 13:49:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:22.645 13:49:13 -- common/autotest_common.sh@852 -- # return 0 00:07:22.645 13:49:13 -- event/cpu_locks.sh@87 -- # locks_exist 3896027 00:07:22.645 13:49:13 -- event/cpu_locks.sh@22 -- # lslocks -p 3896027 00:07:22.645 13:49:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.025 lslocks: write error 00:07:24.025 13:49:14 -- event/cpu_locks.sh@89 -- # killprocess 3896027 00:07:24.025 13:49:14 -- common/autotest_common.sh@926 -- # '[' -z 3896027 ']' 00:07:24.025 13:49:14 -- common/autotest_common.sh@930 -- # kill -0 3896027 00:07:24.025 13:49:14 -- common/autotest_common.sh@931 -- # uname 00:07:24.025 13:49:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:24.025 13:49:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3896027 00:07:24.025 13:49:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:24.025 13:49:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:24.025 13:49:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3896027' 00:07:24.025 killing process with pid 3896027 00:07:24.025 13:49:14 -- common/autotest_common.sh@945 -- # kill 3896027 00:07:24.025 13:49:14 -- common/autotest_common.sh@950 -- # wait 3896027 00:07:24.595 13:49:15 -- event/cpu_locks.sh@90 -- # killprocess 3896203 00:07:24.595 13:49:15 -- common/autotest_common.sh@926 -- # '[' -z 3896203 ']' 00:07:24.595 13:49:15 -- common/autotest_common.sh@930 -- # kill -0 3896203 00:07:24.595 13:49:15 -- common/autotest_common.sh@931 -- # uname 00:07:24.595 13:49:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:24.595 13:49:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3896203 00:07:24.595 13:49:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:24.595 13:49:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:24.595 13:49:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3896203' 00:07:24.595 killing process with pid 3896203 00:07:24.595 13:49:15 -- common/autotest_common.sh@945 -- # kill 3896203 00:07:24.595 13:49:15 -- common/autotest_common.sh@950 -- # wait 3896203 00:07:24.854 00:07:24.854 real 0m4.089s 00:07:24.854 user 0m4.332s 00:07:24.854 sys 0m1.406s 00:07:24.854 13:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.854 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:07:24.854 ************************************ 00:07:24.854 END TEST non_locking_app_on_locked_coremask 00:07:24.854 ************************************ 00:07:25.115 13:49:15 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:07:25.115 13:49:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:25.115 13:49:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.115 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:07:25.115 ************************************ 00:07:25.115 START TEST locking_app_on_unlocked_coremask 00:07:25.115 ************************************ 00:07:25.115 13:49:15 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:25.115 13:49:15 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3896610 00:07:25.115 13:49:15 -- event/cpu_locks.sh@99 -- # waitforlisten 3896610 /var/tmp/spdk.sock 00:07:25.115 13:49:15 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:25.115 13:49:15 -- common/autotest_common.sh@819 -- # '[' -z 3896610 ']' 00:07:25.115 13:49:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.115 13:49:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:25.115 13:49:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.115 13:49:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:25.115 13:49:15 -- common/autotest_common.sh@10 -- # set +x 00:07:25.115 [2024-07-23 13:49:15.945796] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:25.115 [2024-07-23 13:49:15.945872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896610 ] 00:07:25.115 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.115 [2024-07-23 13:49:16.066360] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:25.115 [2024-07-23 13:49:16.066403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.374 [2024-07-23 13:49:16.167260] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.374 [2024-07-23 13:49:16.167412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.943 13:49:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:25.943 13:49:16 -- common/autotest_common.sh@852 -- # return 0 00:07:25.943 13:49:16 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3896779 00:07:25.943 13:49:16 -- event/cpu_locks.sh@103 -- # waitforlisten 3896779 /var/tmp/spdk2.sock 00:07:25.943 13:49:16 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:25.943 13:49:16 -- common/autotest_common.sh@819 -- # '[' -z 3896779 ']' 00:07:25.943 13:49:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.943 13:49:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:25.943 13:49:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
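The test starting here runs two targets on the same core mask: the first claims the core lock as usual, while the second passes --disable-cpumask-locks and a separate RPC socket so the shared core is permitted. A sketch of the two launches, with flags taken directly from the trace:

    # Two targets sharing core mask 0x1: the first claims /var/tmp/spdk_cpu_lock_000,
    # the second opts out of core locking so the overlap is allowed.
    SPDK_TGT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 &                                                  # locking instance
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # non-locking instance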
00:07:25.943 13:49:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:25.943 13:49:16 -- common/autotest_common.sh@10 -- # set +x 00:07:25.943 [2024-07-23 13:49:16.842235] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:25.943 [2024-07-23 13:49:16.842307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896779 ] 00:07:25.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.202 [2024-07-23 13:49:17.002150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.202 [2024-07-23 13:49:17.197045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:26.202 [2024-07-23 13:49:17.197193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.140 13:49:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:27.140 13:49:17 -- common/autotest_common.sh@852 -- # return 0 00:07:27.140 13:49:17 -- event/cpu_locks.sh@105 -- # locks_exist 3896779 00:07:27.140 13:49:17 -- event/cpu_locks.sh@22 -- # lslocks -p 3896779 00:07:27.140 13:49:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.079 lslocks: write error 00:07:28.079 13:49:19 -- event/cpu_locks.sh@107 -- # killprocess 3896610 00:07:28.079 13:49:19 -- common/autotest_common.sh@926 -- # '[' -z 3896610 ']' 00:07:28.079 13:49:19 -- common/autotest_common.sh@930 -- # kill -0 3896610 00:07:28.079 13:49:19 -- common/autotest_common.sh@931 -- # uname 00:07:28.079 13:49:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:28.079 13:49:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3896610 00:07:28.079 13:49:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:28.079 13:49:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:28.079 13:49:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3896610' 00:07:28.079 killing process with pid 3896610 00:07:28.079 13:49:19 -- common/autotest_common.sh@945 -- # kill 3896610 00:07:28.079 13:49:19 -- common/autotest_common.sh@950 -- # wait 3896610 00:07:29.016 13:49:19 -- event/cpu_locks.sh@108 -- # killprocess 3896779 00:07:29.016 13:49:19 -- common/autotest_common.sh@926 -- # '[' -z 3896779 ']' 00:07:29.016 13:49:19 -- common/autotest_common.sh@930 -- # kill -0 3896779 00:07:29.016 13:49:19 -- common/autotest_common.sh@931 -- # uname 00:07:29.016 13:49:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.016 13:49:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3896779 00:07:29.016 13:49:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.016 13:49:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.016 13:49:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3896779' 00:07:29.016 killing process with pid 3896779 00:07:29.016 13:49:19 -- common/autotest_common.sh@945 -- # kill 3896779 00:07:29.016 13:49:19 -- common/autotest_common.sh@950 -- # wait 3896779 00:07:29.275 00:07:29.276 real 0m4.267s 00:07:29.276 user 0m4.533s 00:07:29.276 sys 0m1.509s 00:07:29.276 13:49:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.276 13:49:20 -- common/autotest_common.sh@10 -- # set +x 00:07:29.276 ************************************ 00:07:29.276 END TEST locking_app_on_unlocked_coremask 
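Every unit in this log is driven by run_test, which prints the START TEST / END TEST banners, times the body (the "real/user/sys" lines), and propagates the body's exit status. A simplified model of that observable behavior (the real implementation lives in autotest_common.sh; this sketch only mirrors what the log shows):

    # Simplified model of run_test's banner/timing behavior as seen in this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                      # run the test body; xtrace output lands here
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }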
00:07:29.276 ************************************ 00:07:29.276 13:49:20 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:29.276 13:49:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:29.276 13:49:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.276 13:49:20 -- common/autotest_common.sh@10 -- # set +x 00:07:29.276 ************************************ 00:07:29.276 START TEST locking_app_on_locked_coremask 00:07:29.276 ************************************ 00:07:29.276 13:49:20 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:29.276 13:49:20 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3897282 00:07:29.276 13:49:20 -- event/cpu_locks.sh@116 -- # waitforlisten 3897282 /var/tmp/spdk.sock 00:07:29.276 13:49:20 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.276 13:49:20 -- common/autotest_common.sh@819 -- # '[' -z 3897282 ']' 00:07:29.276 13:49:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.276 13:49:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.276 13:49:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.276 13:49:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.276 13:49:20 -- common/autotest_common.sh@10 -- # set +x 00:07:29.276 [2024-07-23 13:49:20.266925] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:29.276 [2024-07-23 13:49:20.267019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897282 ] 00:07:29.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.534 [2024-07-23 13:49:20.374893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.534 [2024-07-23 13:49:20.470326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.534 [2024-07-23 13:49:20.470478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.470 13:49:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:30.470 13:49:21 -- common/autotest_common.sh@852 -- # return 0 00:07:30.470 13:49:21 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3897359 00:07:30.470 13:49:21 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3897359 /var/tmp/spdk2.sock 00:07:30.470 13:49:21 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:30.470 13:49:21 -- common/autotest_common.sh@640 -- # local es=0 00:07:30.470 13:49:21 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3897359 /var/tmp/spdk2.sock 00:07:30.470 13:49:21 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:30.470 13:49:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.470 13:49:21 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:30.470 13:49:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:30.470 13:49:21 -- common/autotest_common.sh@643 -- # waitforlisten 3897359 /var/tmp/spdk2.sock 00:07:30.470 13:49:21 -- 
common/autotest_common.sh@819 -- # '[' -z 3897359 ']' 00:07:30.470 13:49:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:30.470 13:49:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:30.470 13:49:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:30.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:30.470 13:49:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:30.470 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:07:30.470 [2024-07-23 13:49:21.246577] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:30.470 [2024-07-23 13:49:21.246666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897359 ] 00:07:30.470 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.470 [2024-07-23 13:49:21.388895] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3897282 has claimed it. 00:07:30.470 [2024-07-23 13:49:21.388952] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.039 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3897359) - No such process 00:07:31.039 ERROR: process (pid: 3897359) is no longer running 00:07:31.039 13:49:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:31.039 13:49:21 -- common/autotest_common.sh@852 -- # return 1 00:07:31.039 13:49:21 -- common/autotest_common.sh@643 -- # es=1 00:07:31.039 13:49:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:31.039 13:49:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:31.039 13:49:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:31.039 13:49:21 -- event/cpu_locks.sh@122 -- # locks_exist 3897282 00:07:31.039 13:49:21 -- event/cpu_locks.sh@22 -- # lslocks -p 3897282 00:07:31.039 13:49:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.607 lslocks: write error 00:07:31.607 13:49:22 -- event/cpu_locks.sh@124 -- # killprocess 3897282 00:07:31.607 13:49:22 -- common/autotest_common.sh@926 -- # '[' -z 3897282 ']' 00:07:31.607 13:49:22 -- common/autotest_common.sh@930 -- # kill -0 3897282 00:07:31.607 13:49:22 -- common/autotest_common.sh@931 -- # uname 00:07:31.607 13:49:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:31.607 13:49:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3897282 00:07:31.607 13:49:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:31.607 13:49:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:31.607 13:49:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3897282' 00:07:31.607 killing process with pid 3897282 00:07:31.607 13:49:22 -- common/autotest_common.sh@945 -- # kill 3897282 00:07:31.607 13:49:22 -- common/autotest_common.sh@950 -- # wait 3897282 00:07:32.176 00:07:32.176 real 0m2.695s 00:07:32.176 user 0m2.949s 00:07:32.176 sys 0m0.848s 00:07:32.176 13:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.176 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 ************************************ 00:07:32.176 END TEST locking_app_on_locked_coremask 00:07:32.176 
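The NOT wrapper in the trace above asserts the negative case: the second waitforlisten must fail because the core is already claimed, so the "kill: (pid) - No such process" and "Unable to acquire lock on assigned core mask" lines are the expected outcome, and es=1 is turned into success. A sketch of that inversion (helper name illustrative; the real NOT also validates the wrapped command first):

    # Expected-failure assertion: succeed only when the wrapped command fails,
    # mirroring the 'es=1' / '(( !es == 0 ))' steps in the trace.
    NOT_sketch() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))    # true (exit 0) iff the command exited nonzero
    }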
************************************ 00:07:32.176 13:49:22 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:32.176 13:49:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.176 13:49:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.176 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 ************************************ 00:07:32.176 START TEST locking_overlapped_coremask 00:07:32.176 ************************************ 00:07:32.176 13:49:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:32.176 13:49:22 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3897694 00:07:32.176 13:49:22 -- event/cpu_locks.sh@133 -- # waitforlisten 3897694 /var/tmp/spdk.sock 00:07:32.176 13:49:22 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:32.176 13:49:22 -- common/autotest_common.sh@819 -- # '[' -z 3897694 ']' 00:07:32.176 13:49:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.176 13:49:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:32.176 13:49:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.176 13:49:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:32.176 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 [2024-07-23 13:49:23.008569] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:32.176 [2024-07-23 13:49:23.008644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897694 ] 00:07:32.176 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.176 [2024-07-23 13:49:23.128984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.436 [2024-07-23 13:49:23.235179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.436 [2024-07-23 13:49:23.235369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.436 [2024-07-23 13:49:23.235455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.436 [2024-07-23 13:49:23.235460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.003 13:49:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.003 13:49:23 -- common/autotest_common.sh@852 -- # return 0 00:07:33.003 13:49:23 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:33.003 13:49:23 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3897756 00:07:33.003 13:49:23 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3897756 /var/tmp/spdk2.sock 00:07:33.004 13:49:23 -- common/autotest_common.sh@640 -- # local es=0 00:07:33.004 13:49:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3897756 /var/tmp/spdk2.sock 00:07:33.004 13:49:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:33.004 13:49:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:33.004 13:49:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:33.004 13:49:23 -- common/autotest_common.sh@632 
-- # case "$(type -t "$arg")" in 00:07:33.004 13:49:23 -- common/autotest_common.sh@643 -- # waitforlisten 3897756 /var/tmp/spdk2.sock 00:07:33.004 13:49:23 -- common/autotest_common.sh@819 -- # '[' -z 3897756 ']' 00:07:33.004 13:49:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.004 13:49:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:33.004 13:49:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.004 13:49:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:33.004 13:49:23 -- common/autotest_common.sh@10 -- # set +x 00:07:33.004 [2024-07-23 13:49:24.000576] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:33.004 [2024-07-23 13:49:24.000666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897756 ] 00:07:33.262 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.262 [2024-07-23 13:49:24.134646] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3897694 has claimed it. 00:07:33.262 [2024-07-23 13:49:24.134684] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:33.830 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3897756) - No such process 00:07:33.830 ERROR: process (pid: 3897756) is no longer running 00:07:33.830 13:49:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.830 13:49:24 -- common/autotest_common.sh@852 -- # return 1 00:07:33.830 13:49:24 -- common/autotest_common.sh@643 -- # es=1 00:07:33.830 13:49:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:33.830 13:49:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:33.830 13:49:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:33.830 13:49:24 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:33.830 13:49:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.830 13:49:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.830 13:49:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.830 13:49:24 -- event/cpu_locks.sh@141 -- # killprocess 3897694 00:07:33.830 13:49:24 -- common/autotest_common.sh@926 -- # '[' -z 3897694 ']' 00:07:33.830 13:49:24 -- common/autotest_common.sh@930 -- # kill -0 3897694 00:07:33.830 13:49:24 -- common/autotest_common.sh@931 -- # uname 00:07:33.830 13:49:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:33.830 13:49:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3897694 00:07:33.830 13:49:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:33.830 13:49:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:33.830 13:49:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3897694' 00:07:33.830 killing process with pid 3897694 00:07:33.830 13:49:24 -- 
common/autotest_common.sh@945 -- # kill 3897694 00:07:33.830 13:49:24 -- common/autotest_common.sh@950 -- # wait 3897694 00:07:34.399 00:07:34.399 real 0m2.145s 00:07:34.399 user 0m5.970s 00:07:34.399 sys 0m0.580s 00:07:34.399 13:49:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.399 13:49:25 -- common/autotest_common.sh@10 -- # set +x 00:07:34.399 ************************************ 00:07:34.399 END TEST locking_overlapped_coremask 00:07:34.399 ************************************ 00:07:34.399 13:49:25 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:34.399 13:49:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:34.399 13:49:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.399 13:49:25 -- common/autotest_common.sh@10 -- # set +x 00:07:34.399 ************************************ 00:07:34.399 START TEST locking_overlapped_coremask_via_rpc 00:07:34.399 ************************************ 00:07:34.399 13:49:25 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:34.399 13:49:25 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3897962 00:07:34.399 13:49:25 -- event/cpu_locks.sh@149 -- # waitforlisten 3897962 /var/tmp/spdk.sock 00:07:34.399 13:49:25 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:34.399 13:49:25 -- common/autotest_common.sh@819 -- # '[' -z 3897962 ']' 00:07:34.399 13:49:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.399 13:49:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.399 13:49:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.399 13:49:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.399 13:49:25 -- common/autotest_common.sh@10 -- # set +x 00:07:34.399 [2024-07-23 13:49:25.204348] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:34.399 [2024-07-23 13:49:25.204448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897962 ] 00:07:34.399 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.399 [2024-07-23 13:49:25.323483] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
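With -m 0x7 the overlapped tests expect exactly three core locks, and check_remaining_locks (visible in the trace) compares the lock files actually present in /var/tmp against the expected brace-expanded set. A sketch of that inventory check, using the same globs as the log:

    # Lock-file inventory check for core mask 0x7: exactly cores 000..002 locked.
    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${expected[*]}" ]] && echo "locks match"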
00:07:34.399 [2024-07-23 13:49:25.323521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.659 [2024-07-23 13:49:25.429255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.659 [2024-07-23 13:49:25.429436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.659 [2024-07-23 13:49:25.429521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.659 [2024-07-23 13:49:25.429525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.228 13:49:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.228 13:49:26 -- common/autotest_common.sh@852 -- # return 0 00:07:35.228 13:49:26 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3898144 00:07:35.228 13:49:26 -- event/cpu_locks.sh@153 -- # waitforlisten 3898144 /var/tmp/spdk2.sock 00:07:35.228 13:49:26 -- common/autotest_common.sh@819 -- # '[' -z 3898144 ']' 00:07:35.228 13:49:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.228 13:49:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.228 13:49:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.228 13:49:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.228 13:49:26 -- common/autotest_common.sh@10 -- # set +x 00:07:35.228 13:49:26 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:35.228 [2024-07-23 13:49:26.189979] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:35.228 [2024-07-23 13:49:26.190066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898144 ] 00:07:35.228 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.487 [2024-07-23 13:49:26.320068] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:35.487 [2024-07-23 13:49:26.320095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.487 [2024-07-23 13:49:26.485376] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.487 [2024-07-23 13:49:26.485558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.487 [2024-07-23 13:49:26.485671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.487 [2024-07-23 13:49:26.485672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:36.055 13:49:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.055 13:49:27 -- common/autotest_common.sh@852 -- # return 0 00:07:36.055 13:49:27 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:36.055 13:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.055 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.055 13:49:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.055 13:49:27 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.055 13:49:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:36.055 13:49:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.055 13:49:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:36.314 13:49:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.314 13:49:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:36.314 13:49:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.314 13:49:27 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.314 13:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.314 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.314 [2024-07-23 13:49:27.090280] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3897962 has claimed it. 00:07:36.314 request: 00:07:36.314 { 00:07:36.314 "method": "framework_enable_cpumask_locks", 00:07:36.314 "req_id": 1 00:07:36.314 } 00:07:36.314 Got JSON-RPC error response 00:07:36.314 response: 00:07:36.314 { 00:07:36.314 "code": -32603, 00:07:36.314 "message": "Failed to claim CPU core: 2" 00:07:36.314 } 00:07:36.314 13:49:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:36.314 13:49:27 -- common/autotest_common.sh@643 -- # es=1 00:07:36.314 13:49:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:36.314 13:49:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:36.314 13:49:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:36.314 13:49:27 -- event/cpu_locks.sh@158 -- # waitforlisten 3897962 /var/tmp/spdk.sock 00:07:36.314 13:49:27 -- common/autotest_common.sh@819 -- # '[' -z 3897962 ']' 00:07:36.314 13:49:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.314 13:49:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.314 13:49:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
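The via_rpc variant exercised here toggles core locking at runtime instead of at launch: framework_disable_cpumask_locks releases an instance's locks, and the framework_enable_cpumask_locks call shown in the JSON-RPC exchange above fails with error -32603 ("Failed to claim CPU core: 2") while another instance still holds the core. A sketch of the two calls, with RPC method names taken from the log:

    # Runtime toggling of CPU core locks over JSON-RPC, as exercised above.
    RPC=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    $RPC framework_disable_cpumask_locks                           # release this instance's core locks
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # returns -32603 if a core is contested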
00:07:36.314 13:49:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.314 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.574 13:49:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.574 13:49:27 -- common/autotest_common.sh@852 -- # return 0 00:07:36.574 13:49:27 -- event/cpu_locks.sh@159 -- # waitforlisten 3898144 /var/tmp/spdk2.sock 00:07:36.574 13:49:27 -- common/autotest_common.sh@819 -- # '[' -z 3898144 ']' 00:07:36.574 13:49:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.574 13:49:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.574 13:49:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.574 13:49:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.574 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.833 13:49:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.833 13:49:27 -- common/autotest_common.sh@852 -- # return 0 00:07:36.833 13:49:27 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:36.833 13:49:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:36.833 13:49:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:36.833 13:49:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:36.833 00:07:36.833 real 0m2.436s 00:07:36.833 user 0m1.127s 00:07:36.833 sys 0m0.233s 00:07:36.833 13:49:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.833 13:49:27 -- common/autotest_common.sh@10 -- # set +x 00:07:36.833 ************************************ 00:07:36.833 END TEST locking_overlapped_coremask_via_rpc 00:07:36.833 ************************************ 00:07:36.833 13:49:27 -- event/cpu_locks.sh@174 -- # cleanup 00:07:36.833 13:49:27 -- event/cpu_locks.sh@15 -- # [[ -z 3897962 ]] 00:07:36.833 13:49:27 -- event/cpu_locks.sh@15 -- # killprocess 3897962 00:07:36.833 13:49:27 -- common/autotest_common.sh@926 -- # '[' -z 3897962 ']' 00:07:36.833 13:49:27 -- common/autotest_common.sh@930 -- # kill -0 3897962 00:07:36.833 13:49:27 -- common/autotest_common.sh@931 -- # uname 00:07:36.833 13:49:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.833 13:49:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3897962 00:07:36.833 13:49:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.833 13:49:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.833 13:49:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3897962' 00:07:36.833 killing process with pid 3897962 00:07:36.833 13:49:27 -- common/autotest_common.sh@945 -- # kill 3897962 00:07:36.833 13:49:27 -- common/autotest_common.sh@950 -- # wait 3897962 00:07:37.099 13:49:28 -- event/cpu_locks.sh@16 -- # [[ -z 3898144 ]] 00:07:37.099 13:49:28 -- event/cpu_locks.sh@16 -- # killprocess 3898144 00:07:37.099 13:49:28 -- common/autotest_common.sh@926 -- # '[' -z 3898144 ']' 00:07:37.099 13:49:28 -- common/autotest_common.sh@930 -- # kill -0 3898144 00:07:37.099 13:49:28 -- common/autotest_common.sh@931 -- # uname 
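check_remaining_locks above is just a glob-against-brace-expansion comparison: /var/tmp/spdk_cpu_lock_* must expand to exactly one lock file per core in the 0x7 mask. The same check by hand, assuming the first target is still up:

    ls /var/tmp/spdk_cpu_lock_*    # expect exactly: ..._000 ..._001 ..._002
    # an extra file here would mean a leaked core lock; a missing one, a lock never taken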
00:07:37.099 13:49:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:37.099 13:49:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3898144 00:07:37.414 13:49:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:37.414 13:49:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:37.414 13:49:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3898144' 00:07:37.414 killing process with pid 3898144 00:07:37.414 13:49:28 -- common/autotest_common.sh@945 -- # kill 3898144 00:07:37.414 13:49:28 -- common/autotest_common.sh@950 -- # wait 3898144 00:07:37.674 13:49:28 -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.674 13:49:28 -- event/cpu_locks.sh@1 -- # cleanup 00:07:37.674 13:49:28 -- event/cpu_locks.sh@15 -- # [[ -z 3897962 ]] 00:07:37.674 13:49:28 -- event/cpu_locks.sh@15 -- # killprocess 3897962 00:07:37.674 13:49:28 -- common/autotest_common.sh@926 -- # '[' -z 3897962 ']' 00:07:37.674 13:49:28 -- common/autotest_common.sh@930 -- # kill -0 3897962 00:07:37.674 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3897962) - No such process 00:07:37.674 13:49:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3897962 is not found' 00:07:37.674 Process with pid 3897962 is not found 00:07:37.674 13:49:28 -- event/cpu_locks.sh@16 -- # [[ -z 3898144 ]] 00:07:37.674 13:49:28 -- event/cpu_locks.sh@16 -- # killprocess 3898144 00:07:37.674 13:49:28 -- common/autotest_common.sh@926 -- # '[' -z 3898144 ']' 00:07:37.674 13:49:28 -- common/autotest_common.sh@930 -- # kill -0 3898144 00:07:37.674 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3898144) - No such process 00:07:37.674 13:49:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3898144 is not found' 00:07:37.674 Process with pid 3898144 is not found 00:07:37.674 13:49:28 -- event/cpu_locks.sh@18 -- # rm -f 00:07:37.674 00:07:37.674 real 0m20.855s 00:07:37.674 user 0m35.007s 00:07:37.674 sys 0m7.096s 00:07:37.674 13:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.674 13:49:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 ************************************ 00:07:37.674 END TEST cpu_locks 00:07:37.674 ************************************ 00:07:37.674 00:07:37.674 real 0m48.434s 00:07:37.674 user 1m30.970s 00:07:37.674 sys 0m12.188s 00:07:37.674 13:49:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.674 13:49:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 ************************************ 00:07:37.674 END TEST event 00:07:37.674 ************************************ 00:07:37.674 13:49:28 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:37.674 13:49:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:37.674 13:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.674 13:49:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 ************************************ 00:07:37.674 START TEST thread 00:07:37.674 ************************************ 00:07:37.674 13:49:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:07:37.674 * Looking for test storage... 
00:07:37.674 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:07:37.674 13:49:28 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.674 13:49:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:37.674 13:49:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.674 13:49:28 -- common/autotest_common.sh@10 -- # set +x 00:07:37.674 ************************************ 00:07:37.674 START TEST thread_poller_perf 00:07:37.674 ************************************ 00:07:37.674 13:49:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.934 [2024-07-23 13:49:28.707024] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:37.934 [2024-07-23 13:49:28.707115] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898591 ] 00:07:37.934 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.934 [2024-07-23 13:49:28.831008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.934 [2024-07-23 13:49:28.927949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.934 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:39.312 ====================================== 00:07:39.312 busy:2309840082 (cyc) 00:07:39.312 total_run_count: 506000 00:07:39.312 tsc_hz: 2300000000 (cyc) 00:07:39.312 ====================================== 00:07:39.312 poller_cost: 4564 (cyc), 1984 (nsec) 00:07:39.312 00:07:39.312 real 0m1.329s 00:07:39.312 user 0m1.178s 00:07:39.312 sys 0m0.144s 00:07:39.312 13:49:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.312 13:49:30 -- common/autotest_common.sh@10 -- # set +x 00:07:39.312 ************************************ 00:07:39.312 END TEST thread_poller_perf 00:07:39.312 ************************************ 00:07:39.312 13:49:30 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:39.312 13:49:30 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:39.312 13:49:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.312 13:49:30 -- common/autotest_common.sh@10 -- # set +x 00:07:39.312 ************************************ 00:07:39.312 START TEST thread_poller_perf 00:07:39.312 ************************************ 00:07:39.312 13:49:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:39.312 [2024-07-23 13:49:30.081677] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
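poller_cost in the summary above is plain division: busy TSC cycles over total_run_count, then cycles to nanoseconds via tsc_hz. A sketch of the arithmetic in bash, using the numbers just printed:

    echo $(( 2309840082 / 506000 ))              # 4564 cycles per poller invocation
    echo $(( 4564 * 1000000000 / 2300000000 ))   # ~1984 ns at the 2.3 GHz TSC above

The 0-microsecond-period run that starts next lands at 261 cycles / 113 ns by the same formula, consistent with timed pollers carrying re-arming overhead that active (0-period) pollers avoid.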
00:07:39.312 [2024-07-23 13:49:30.081776] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898793 ] 00:07:39.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.312 [2024-07-23 13:49:30.202270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.312 [2024-07-23 13:49:30.299397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.312 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:40.692 ====================================== 00:07:40.693 busy:2302618824 (cyc) 00:07:40.693 total_run_count: 8794000 00:07:40.693 tsc_hz: 2300000000 (cyc) 00:07:40.693 ====================================== 00:07:40.693 poller_cost: 261 (cyc), 113 (nsec) 00:07:40.693 00:07:40.693 real 0m1.320s 00:07:40.693 user 0m1.176s 00:07:40.693 sys 0m0.137s 00:07:40.693 13:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.693 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 ************************************ 00:07:40.693 END TEST thread_poller_perf 00:07:40.693 ************************************ 00:07:40.693 13:49:31 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:40.693 13:49:31 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:40.693 13:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.693 13:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.693 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.693 ************************************ 00:07:40.693 START TEST thread_spdk_lock 00:07:40.693 ************************************ 00:07:40.693 13:49:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:07:40.693 [2024-07-23 13:49:31.442607] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:40.693 [2024-07-23 13:49:31.442692] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898990 ] 00:07:40.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.693 [2024-07-23 13:49:31.563625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.693 [2024-07-23 13:49:31.662031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.693 [2024-07-23 13:49:31.662036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.262 [2024-07-23 13:49:32.171175] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:41.262 [2024-07-23 13:49:32.171231] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:41.262 [2024-07-23 13:49:32.171248] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x149c080 00:07:41.262 [2024-07-23 13:49:32.172266] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:41.262 [2024-07-23 13:49:32.172373] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:41.262 [2024-07-23 13:49:32.172409] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:41.262 Starting test contend 00:07:41.262 Worker Delay Wait us Hold us Total us 00:07:41.262 0 3 154590 193386 347976 00:07:41.262 1 5 82637 291811 374448 00:07:41.262 PASS test contend 00:07:41.262 Starting test hold_by_poller 00:07:41.262 PASS test hold_by_poller 00:07:41.262 Starting test hold_by_message 00:07:41.262 PASS test hold_by_message 00:07:41.262 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:07:41.262 100014 assertions passed 00:07:41.262 0 assertions failed 00:07:41.262 00:07:41.262 real 0m0.822s 00:07:41.262 user 0m1.197s 00:07:41.262 sys 0m0.131s 00:07:41.262 13:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.262 13:49:32 -- common/autotest_common.sh@10 -- # set +x 00:07:41.262 ************************************ 00:07:41.262 END TEST thread_spdk_lock 00:07:41.262 ************************************ 00:07:41.522 00:07:41.522 real 0m3.706s 00:07:41.522 user 0m3.633s 00:07:41.522 sys 0m0.599s 00:07:41.522 13:49:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.522 13:49:32 -- common/autotest_common.sh@10 -- # set +x 00:07:41.522 ************************************ 00:07:41.522 END TEST thread 00:07:41.522 ************************************ 00:07:41.522 13:49:32 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:41.522 13:49:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
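The *ERROR* spinlock lines in the spdk_lock run above are the test driving SPDK's unrecoverable-spinlock detection on purpose (deadlock, lock held while a thread goes off CPU); the authoritative outcome is the PASS lines and the 100014-passed / 0-failed assertion summary. To re-run just that binary outside the harness, assuming the same tree:

    ./test/thread/lock/spdk_lock    # the EAL -c 0x3 core mask above is set by the test itself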
00:07:41.522 13:49:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.522 13:49:32 -- common/autotest_common.sh@10 -- # set +x 00:07:41.522 ************************************ 00:07:41.522 START TEST accel 00:07:41.522 ************************************ 00:07:41.522 13:49:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:07:41.522 * Looking for test storage... 00:07:41.522 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:41.522 13:49:32 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:41.522 13:49:32 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:41.522 13:49:32 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:41.522 13:49:32 -- accel/accel.sh@59 -- # spdk_tgt_pid=3899161 00:07:41.522 13:49:32 -- accel/accel.sh@60 -- # waitforlisten 3899161 00:07:41.522 13:49:32 -- common/autotest_common.sh@819 -- # '[' -z 3899161 ']' 00:07:41.522 13:49:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.522 13:49:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.522 13:49:32 -- accel/accel.sh@58 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:41.522 13:49:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.522 13:49:32 -- accel/accel.sh@58 -- # build_accel_config 00:07:41.522 13:49:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.522 13:49:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.522 13:49:32 -- common/autotest_common.sh@10 -- # set +x 00:07:41.522 13:49:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.522 13:49:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.522 13:49:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.522 13:49:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.522 13:49:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.522 13:49:32 -- accel/accel.sh@42 -- # jq -r . 00:07:41.522 [2024-07-23 13:49:32.461362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:41.522 [2024-07-23 13:49:32.461436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899161 ] 00:07:41.522 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.782 [2024-07-23 13:49:32.580231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.782 [2024-07-23 13:49:32.676468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.782 [2024-07-23 13:49:32.676616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.351 13:49:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.351 13:49:33 -- common/autotest_common.sh@852 -- # return 0 00:07:42.351 13:49:33 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:42.351 13:49:33 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:42.351 13:49:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.351 13:49:33 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:42.351 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:07:42.351 13:49:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # IFS== 00:07:42.610 13:49:33 -- accel/accel.sh@64 -- # read -r opc module 00:07:42.610 13:49:33 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:42.610 13:49:33 -- accel/accel.sh@67 -- # killprocess 3899161 00:07:42.610 13:49:33 -- common/autotest_common.sh@926 -- # '[' -z 3899161 ']' 00:07:42.610 13:49:33 -- common/autotest_common.sh@930 -- # kill -0 3899161 00:07:42.610 13:49:33 -- common/autotest_common.sh@931 -- # uname 00:07:42.610 13:49:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:42.610 13:49:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3899161 00:07:42.610 13:49:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:42.610 13:49:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:42.610 13:49:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3899161' 00:07:42.610 killing process with pid 3899161 00:07:42.610 13:49:33 -- common/autotest_common.sh@945 -- # kill 3899161 00:07:42.610 13:49:33 -- common/autotest_common.sh@950 -- # wait 3899161 00:07:42.870 13:49:33 -- accel/accel.sh@68 -- # trap - ERR 00:07:42.870 13:49:33 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:42.870 13:49:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:42.870 13:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.870 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:07:42.870 13:49:33 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:42.870 13:49:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:42.870 13:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.870 13:49:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.870 13:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.870 13:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.870 13:49:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.870 13:49:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.870 13:49:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.870 13:49:33 -- accel/accel.sh@42 -- # jq -r . 
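The long trace above is accel.sh reading back accel_get_opc_assignments and recording every opcode as handled by the software module, which is what this build should report: the [[ 0 -gt 0 ]] guards in build_accel_config all evaluated false, so no hardware engine was configured. The same dump by hand, reusing the script's own jq program:

    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # expect every line to read <opcode>=software on this configuration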
00:07:42.870 13:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.870 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:07:42.870 13:49:33 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:42.870 13:49:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:42.870 13:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.870 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:07:42.870 ************************************ 00:07:42.870 START TEST accel_missing_filename 00:07:42.870 ************************************ 00:07:42.870 13:49:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:42.870 13:49:33 -- common/autotest_common.sh@640 -- # local es=0 00:07:42.870 13:49:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:42.870 13:49:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:42.870 13:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.870 13:49:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:42.870 13:49:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.870 13:49:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:42.870 13:49:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:42.870 13:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.870 13:49:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.870 13:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.870 13:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.870 13:49:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.870 13:49:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.870 13:49:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.870 13:49:33 -- accel/accel.sh@42 -- # jq -r . 00:07:43.129 [2024-07-23 13:49:33.903015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:43.129 [2024-07-23 13:49:33.903105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899434 ] 00:07:43.129 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.129 [2024-07-23 13:49:34.027102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.129 [2024-07-23 13:49:34.125017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.389 [2024-07-23 13:49:34.174767] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.389 [2024-07-23 13:49:34.247245] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:43.389 A filename is required. 
00:07:43.389 13:49:34 -- common/autotest_common.sh@643 -- # es=234 00:07:43.389 13:49:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:43.389 13:49:34 -- common/autotest_common.sh@652 -- # es=106 00:07:43.389 13:49:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:43.389 13:49:34 -- common/autotest_common.sh@660 -- # es=1 00:07:43.389 13:49:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:43.389 00:07:43.389 real 0m0.457s 00:07:43.389 user 0m0.313s 00:07:43.389 sys 0m0.184s 00:07:43.389 13:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.389 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:43.389 ************************************ 00:07:43.389 END TEST accel_missing_filename 00:07:43.389 ************************************ 00:07:43.389 13:49:34 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:43.389 13:49:34 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:43.389 13:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.389 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:43.389 ************************************ 00:07:43.389 START TEST accel_compress_verify 00:07:43.389 ************************************ 00:07:43.389 13:49:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:43.390 13:49:34 -- common/autotest_common.sh@640 -- # local es=0 00:07:43.390 13:49:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:43.390 13:49:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:43.390 13:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:43.390 13:49:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:43.390 13:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:43.390 13:49:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:43.390 13:49:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:07:43.390 13:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.390 13:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.390 13:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.390 13:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.390 13:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.390 13:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.390 13:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.390 13:49:34 -- accel/accel.sh@42 -- # jq -r . 00:07:43.390 [2024-07-23 13:49:34.406712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:43.390 [2024-07-23 13:49:34.406805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899462 ] 00:07:43.649 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.649 [2024-07-23 13:49:34.529476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.649 [2024-07-23 13:49:34.626380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.909 [2024-07-23 13:49:34.676379] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.909 [2024-07-23 13:49:34.748877] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:43.909 00:07:43.909 Compression does not support the verify option, aborting. 00:07:43.909 13:49:34 -- common/autotest_common.sh@643 -- # es=161 00:07:43.909 13:49:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:43.909 13:49:34 -- common/autotest_common.sh@652 -- # es=33 00:07:43.909 13:49:34 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:43.909 13:49:34 -- common/autotest_common.sh@660 -- # es=1 00:07:43.909 13:49:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:43.909 00:07:43.909 real 0m0.453s 00:07:43.909 user 0m0.309s 00:07:43.909 sys 0m0.185s 00:07:43.909 13:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.909 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:43.909 ************************************ 00:07:43.909 END TEST accel_compress_verify 00:07:43.909 ************************************ 00:07:43.909 13:49:34 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:43.909 13:49:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:43.909 13:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.909 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:43.909 ************************************ 00:07:43.909 START TEST accel_wrong_workload 00:07:43.909 ************************************ 00:07:43.909 13:49:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:43.909 13:49:34 -- common/autotest_common.sh@640 -- # local es=0 00:07:43.909 13:49:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:43.909 13:49:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:43.909 13:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:43.909 13:49:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:43.909 13:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:43.909 13:49:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:43.909 13:49:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:43.909 13:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.909 13:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.909 13:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.909 13:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.909 13:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.909 13:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.909 13:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.909 13:49:34 -- accel/accel.sh@42 -- # jq -r . 
00:07:43.909 Unsupported workload type: foobar 00:07:43.909 [2024-07-23 13:49:34.906007] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:43.909 accel_perf options: 00:07:43.909 [-h help message] 00:07:43.909 [-q queue depth per core] 00:07:43.909 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:43.909 [-T number of threads per core 00:07:43.909 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:43.909 [-t time in seconds] 00:07:43.909 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:43.909 [ dif_verify, , dif_generate, dif_generate_copy 00:07:43.909 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:43.909 [-l for compress/decompress workloads, name of uncompressed input file 00:07:43.909 [-S for crc32c workload, use this seed value (default 0) 00:07:43.909 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:43.909 [-f for fill workload, use this BYTE value (default 255) 00:07:43.909 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:43.909 [-y verify result if this switch is on] 00:07:43.909 [-a tasks to allocate per core (default: same value as -q)] 00:07:43.909 Can be used to spread operations across a wider range of memory. 00:07:43.909 13:49:34 -- common/autotest_common.sh@643 -- # es=1 00:07:43.909 13:49:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:43.909 13:49:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:43.909 13:49:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:43.909 00:07:43.909 real 0m0.029s 00:07:43.909 user 0m0.013s 00:07:43.909 sys 0m0.017s 00:07:43.909 13:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.909 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:43.909 ************************************ 00:07:43.909 END TEST accel_wrong_workload 00:07:43.909 ************************************ 00:07:43.909 Error: writing output failed: Broken pipe 00:07:44.169 13:49:34 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:44.169 13:49:34 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:44.169 13:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.169 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:44.169 ************************************ 00:07:44.169 START TEST accel_negative_buffers 00:07:44.169 ************************************ 00:07:44.169 13:49:34 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:44.169 13:49:34 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.169 13:49:34 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:44.169 13:49:34 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:44.169 13:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.169 13:49:34 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:44.169 13:49:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.169 13:49:34 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:44.169 13:49:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:07:44.169 13:49:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.169 13:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.169 13:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.169 13:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.169 13:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.169 13:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.169 13:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.169 13:49:34 -- accel/accel.sh@42 -- # jq -r . 00:07:44.169 -x option must be non-negative. 00:07:44.169 [2024-07-23 13:49:34.981017] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:44.169 accel_perf options: 00:07:44.169 [-h help message] 00:07:44.169 [-q queue depth per core] 00:07:44.169 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:44.169 [-T number of threads per core 00:07:44.169 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:44.169 [-t time in seconds] 00:07:44.169 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:44.169 [ dif_verify, , dif_generate, dif_generate_copy 00:07:44.169 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:44.169 [-l for compress/decompress workloads, name of uncompressed input file 00:07:44.169 [-S for crc32c workload, use this seed value (default 0) 00:07:44.169 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:44.169 [-f for fill workload, use this BYTE value (default 255) 00:07:44.169 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:44.169 [-y verify result if this switch is on] 00:07:44.169 [-a tasks to allocate per core (default: same value as -q)] 00:07:44.169 Can be used to spread operations across a wider range of memory. 
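The two NOT tests above hand accel_perf an unknown workload and a negative -x on purpose, which is why the option summary gets printed twice. For contrast, a sketch of invocations that summary does accept, assuming the repo root as working directory and reusing flags from elsewhere in this run:

    # xor requires at least two source buffers
    ./build/examples/accel_perf -t 1 -w xor -y -x 2
    # compress takes its input file via -l; per the compress_verify test above, leave -y off
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib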
00:07:44.169 13:49:34 -- common/autotest_common.sh@643 -- # es=1 00:07:44.169 13:49:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.169 13:49:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.169 13:49:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.169 00:07:44.169 real 0m0.029s 00:07:44.169 user 0m0.014s 00:07:44.169 sys 0m0.015s 00:07:44.169 13:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.169 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:44.169 ************************************ 00:07:44.169 END TEST accel_negative_buffers 00:07:44.169 ************************************ 00:07:44.169 Error: writing output failed: Broken pipe 00:07:44.169 13:49:35 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:44.169 13:49:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:44.169 13:49:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.169 13:49:35 -- common/autotest_common.sh@10 -- # set +x 00:07:44.169 ************************************ 00:07:44.169 START TEST accel_crc32c 00:07:44.169 ************************************ 00:07:44.169 13:49:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:44.169 13:49:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.169 13:49:35 -- accel/accel.sh@17 -- # local accel_module 00:07:44.169 13:49:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:44.169 13:49:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:44.169 13:49:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.169 13:49:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.169 13:49:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.169 13:49:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.169 13:49:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.169 13:49:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.169 13:49:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.169 13:49:35 -- accel/accel.sh@42 -- # jq -r . 00:07:44.169 [2024-07-23 13:49:35.054450] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:44.169 [2024-07-23 13:49:35.054534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899594 ] 00:07:44.169 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.169 [2024-07-23 13:49:35.175843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.429 [2024-07-23 13:49:35.273652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.809 13:49:36 -- accel/accel.sh@18 -- # out=' 00:07:45.809 SPDK Configuration: 00:07:45.809 Core mask: 0x1 00:07:45.809 00:07:45.809 Accel Perf Configuration: 00:07:45.809 Workload Type: crc32c 00:07:45.809 CRC-32C seed: 32 00:07:45.809 Transfer size: 4096 bytes 00:07:45.809 Vector count 1 00:07:45.809 Module: software 00:07:45.809 Queue depth: 32 00:07:45.809 Allocate depth: 32 00:07:45.809 # threads/core: 1 00:07:45.809 Run time: 1 seconds 00:07:45.809 Verify: Yes 00:07:45.809 00:07:45.809 Running for 1 seconds... 
00:07:45.809 00:07:45.809 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.809 ------------------------------------------------------------------------------------ 00:07:45.809 0,0 525888/s 2054 MiB/s 0 0 00:07:45.809 ==================================================================================== 00:07:45.809 Total 525888/s 2054 MiB/s 0 0' 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:45.809 13:49:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:45.809 13:49:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.809 13:49:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.809 13:49:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.809 13:49:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.809 13:49:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.809 13:49:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.809 13:49:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.809 13:49:36 -- accel/accel.sh@42 -- # jq -r . 00:07:45.809 [2024-07-23 13:49:36.490782] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:45.809 [2024-07-23 13:49:36.490876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899817 ] 00:07:45.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.809 [2024-07-23 13:49:36.609589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.809 [2024-07-23 13:49:36.707139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=0x1 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=crc32c 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=32 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 
13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=software 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=32 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=32 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=1 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val=Yes 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:45.809 13:49:36 -- accel/accel.sh@21 -- # val= 00:07:45.809 13:49:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # IFS=: 00:07:45.809 13:49:36 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@21 -- # val= 00:07:47.190 13:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@21 -- # val= 00:07:47.190 13:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@21 -- # val= 00:07:47.190 13:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@21 -- # val= 00:07:47.190 13:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@21 -- # val= 00:07:47.190 13:49:37 -- accel/accel.sh@22 -- # case "$var" in 
00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@21 -- # val= 00:07:47.190 13:49:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # IFS=: 00:07:47.190 13:49:37 -- accel/accel.sh@20 -- # read -r var val 00:07:47.190 13:49:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.190 13:49:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:47.190 13:49:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.190 00:07:47.190 real 0m2.887s 00:07:47.190 user 0m2.518s 00:07:47.190 sys 0m0.373s 00:07:47.190 13:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.190 13:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:47.190 ************************************ 00:07:47.190 END TEST accel_crc32c 00:07:47.190 ************************************ 00:07:47.190 13:49:37 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:47.190 13:49:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:47.190 13:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.190 13:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:47.190 ************************************ 00:07:47.190 START TEST accel_crc32c_C2 00:07:47.190 ************************************ 00:07:47.190 13:49:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:47.190 13:49:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.190 13:49:37 -- accel/accel.sh@17 -- # local accel_module 00:07:47.190 13:49:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:47.190 13:49:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:47.190 13:49:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.190 13:49:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.190 13:49:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.190 13:49:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.190 13:49:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.190 13:49:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.190 13:49:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.190 13:49:37 -- accel/accel.sh@42 -- # jq -r . 00:07:47.190 [2024-07-23 13:49:37.984107] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:47.190 [2024-07-23 13:49:37.984201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900064 ] 00:07:47.190 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.191 [2024-07-23 13:49:38.103112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.191 [2024-07-23 13:49:38.199966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.571 13:49:39 -- accel/accel.sh@18 -- # out=' 00:07:48.571 SPDK Configuration: 00:07:48.571 Core mask: 0x1 00:07:48.571 00:07:48.571 Accel Perf Configuration: 00:07:48.571 Workload Type: crc32c 00:07:48.571 CRC-32C seed: 0 00:07:48.571 Transfer size: 4096 bytes 00:07:48.571 Vector count 2 00:07:48.571 Module: software 00:07:48.571 Queue depth: 32 00:07:48.571 Allocate depth: 32 00:07:48.571 # threads/core: 1 00:07:48.571 Run time: 1 seconds 00:07:48.571 Verify: Yes 00:07:48.571 00:07:48.571 Running for 1 seconds... 00:07:48.571 00:07:48.571 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.571 ------------------------------------------------------------------------------------ 00:07:48.571 0,0 383456/s 2995 MiB/s 0 0 00:07:48.571 ==================================================================================== 00:07:48.571 Total 383456/s 1497 MiB/s 0 0' 00:07:48.571 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.571 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.571 13:49:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:48.571 13:49:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:48.572 13:49:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.572 13:49:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.572 13:49:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.572 13:49:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.572 13:49:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.572 13:49:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.572 13:49:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.572 13:49:39 -- accel/accel.sh@42 -- # jq -r . 00:07:48.572 [2024-07-23 13:49:39.433506] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
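The bandwidth columns in these accel summaries follow directly from transfers per second times transfer size. Checking the two crc32c runs above in bash, with 1 MiB = 1048576 bytes:

    echo $(( 525888 * 4096 / 1048576 ))       # 2054 MiB/s: one 4 KiB vector per transfer
    echo $(( 383456 * 2 * 4096 / 1048576 ))   # 2995 MiB/s: -C 2, two 4 KiB vectors

The Total row of the -C 2 run prints 1497 MiB/s for the same 383456 transfers/s, i.e. it evidently omits the x2 vector factor that the per-core row applies.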
00:07:48.572 [2024-07-23 13:49:39.433601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900250 ] 00:07:48.572 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.572 [2024-07-23 13:49:39.553371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.831 [2024-07-23 13:49:39.650498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val=0x1 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val=crc32c 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val=0 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.831 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.831 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.831 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val=software 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val=32 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val=32 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- 
accel/accel.sh@21 -- # val=1 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val=Yes 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:48.832 13:49:39 -- accel/accel.sh@21 -- # val= 00:07:48.832 13:49:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # IFS=: 00:07:48.832 13:49:39 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@21 -- # val= 00:07:50.212 13:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@21 -- # val= 00:07:50.212 13:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@21 -- # val= 00:07:50.212 13:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@21 -- # val= 00:07:50.212 13:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@21 -- # val= 00:07:50.212 13:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@21 -- # val= 00:07:50.212 13:49:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # IFS=: 00:07:50.212 13:49:40 -- accel/accel.sh@20 -- # read -r var val 00:07:50.212 13:49:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.212 13:49:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:50.212 13:49:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.212 00:07:50.212 real 0m2.905s 00:07:50.212 user 0m2.540s 00:07:50.212 sys 0m0.366s 00:07:50.212 13:49:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.212 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:50.212 ************************************ 00:07:50.212 END TEST accel_crc32c_C2 00:07:50.212 ************************************ 00:07:50.212 13:49:40 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:50.212 13:49:40 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:50.212 13:49:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.212 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:50.212 ************************************ 00:07:50.212 START TEST accel_copy 
00:07:50.212 ************************************ 00:07:50.212 13:49:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:50.212 13:49:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.212 13:49:40 -- accel/accel.sh@17 -- # local accel_module 00:07:50.212 13:49:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:50.212 13:49:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:50.212 13:49:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.212 13:49:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.212 13:49:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.212 13:49:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.212 13:49:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.212 13:49:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.212 13:49:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.212 13:49:40 -- accel/accel.sh@42 -- # jq -r . 00:07:50.212 [2024-07-23 13:49:40.934851] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:50.212 [2024-07-23 13:49:40.934943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900446 ] 00:07:50.212 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.212 [2024-07-23 13:49:41.055771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.212 [2024-07-23 13:49:41.152647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.591 13:49:42 -- accel/accel.sh@18 -- # out=' 00:07:51.591 SPDK Configuration: 00:07:51.591 Core mask: 0x1 00:07:51.591 00:07:51.591 Accel Perf Configuration: 00:07:51.591 Workload Type: copy 00:07:51.591 Transfer size: 4096 bytes 00:07:51.591 Vector count 1 00:07:51.591 Module: software 00:07:51.591 Queue depth: 32 00:07:51.591 Allocate depth: 32 00:07:51.591 # threads/core: 1 00:07:51.591 Run time: 1 seconds 00:07:51.591 Verify: Yes 00:07:51.591 00:07:51.591 Running for 1 seconds... 00:07:51.591 00:07:51.591 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.591 ------------------------------------------------------------------------------------ 00:07:51.591 0,0 344352/s 1345 MiB/s 0 0 00:07:51.591 ==================================================================================== 00:07:51.591 Total 344352/s 1345 MiB/s 0 0' 00:07:51.591 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.591 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.591 13:49:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:51.591 13:49:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:51.591 13:49:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.591 13:49:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.591 13:49:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.591 13:49:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.591 13:49:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.591 13:49:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.591 13:49:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.591 13:49:42 -- accel/accel.sh@42 -- # jq -r . 00:07:51.591 [2024-07-23 13:49:42.386582] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
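As the trace shows, each accel_test case runs the same accel_perf command twice: once at accel.sh@18 with the summary captured into out='...', and once at accel.sh@15 with the output parsed field by field through the IFS=: / read -r var val / case "$var" loop; in both passes a wrapper (accel.sh@12) appends -c /dev/fd/62 to feed in the JSON accel config built by build_accel_config. A paraphrased sketch of that shape, not the verbatim harness:

    # Paraphrased shape of one accel_test case; the real wrapper also appends
    # -c /dev/fd/62 to supply the JSON accel config on file descriptor 62.
    run='./build/examples/accel_perf -t 1 -w copy -y'
    out=$($run)                                 # pass 1: summary kept for display
    $run | while IFS=: read -r var val; do      # pass 2: output parsed field by field
        case "$var" in
          *'Workload Type'*) echo "opc=$val" ;; # hypothetical check of one field
        esac
    done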
00:07:51.591 [2024-07-23 13:49:42.386681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900630 ] 00:07:51.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.591 [2024-07-23 13:49:42.507286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.591 [2024-07-23 13:49:42.603900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val=0x1 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val=copy 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val=software 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val=32 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val=32 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- accel/accel.sh@21 -- # val=1 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.850 13:49:42 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:51.850 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.850 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.851 13:49:42 -- accel/accel.sh@21 -- # val=Yes 00:07:51.851 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.851 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.851 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:51.851 13:49:42 -- accel/accel.sh@21 -- # val= 00:07:51.851 13:49:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # IFS=: 00:07:51.851 13:49:42 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@21 -- # val= 00:07:53.231 13:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # IFS=: 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@21 -- # val= 00:07:53.231 13:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # IFS=: 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@21 -- # val= 00:07:53.231 13:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # IFS=: 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@21 -- # val= 00:07:53.231 13:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # IFS=: 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@21 -- # val= 00:07:53.231 13:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # IFS=: 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@21 -- # val= 00:07:53.231 13:49:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # IFS=: 00:07:53.231 13:49:43 -- accel/accel.sh@20 -- # read -r var val 00:07:53.231 13:49:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.231 13:49:43 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:53.231 13:49:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.231 00:07:53.231 real 0m2.909s 00:07:53.231 user 0m2.533s 00:07:53.231 sys 0m0.377s 00:07:53.231 13:49:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.231 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.231 ************************************ 00:07:53.231 END TEST accel_copy 00:07:53.231 ************************************ 00:07:53.231 13:49:43 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.231 13:49:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:53.231 13:49:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.231 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.231 ************************************ 00:07:53.231 START TEST accel_fill 00:07:53.231 ************************************ 00:07:53.231 13:49:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.231 13:49:43 -- accel/accel.sh@16 -- # local accel_opc 
00:07:53.231 13:49:43 -- accel/accel.sh@17 -- # local accel_module 00:07:53.231 13:49:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.231 13:49:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:53.231 13:49:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.231 13:49:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.231 13:49:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.231 13:49:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.231 13:49:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.231 13:49:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.231 13:49:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.231 13:49:43 -- accel/accel.sh@42 -- # jq -r . 00:07:53.231 [2024-07-23 13:49:43.886871] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:53.231 [2024-07-23 13:49:43.886963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900825 ] 00:07:53.231 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.231 [2024-07-23 13:49:44.008371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.231 [2024-07-23 13:49:44.103335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.643 13:49:45 -- accel/accel.sh@18 -- # out=' 00:07:54.643 SPDK Configuration: 00:07:54.643 Core mask: 0x1 00:07:54.643 00:07:54.643 Accel Perf Configuration: 00:07:54.643 Workload Type: fill 00:07:54.643 Fill pattern: 0x80 00:07:54.643 Transfer size: 4096 bytes 00:07:54.643 Vector count 1 00:07:54.643 Module: software 00:07:54.643 Queue depth: 64 00:07:54.643 Allocate depth: 64 00:07:54.643 # threads/core: 1 00:07:54.643 Run time: 1 seconds 00:07:54.643 Verify: Yes 00:07:54.643 00:07:54.643 Running for 1 seconds... 00:07:54.643 00:07:54.643 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:54.643 ------------------------------------------------------------------------------------ 00:07:54.643 0,0 606208/s 2368 MiB/s 0 0 00:07:54.643 ==================================================================================== 00:07:54.643 Total 606208/s 2368 MiB/s 0 0' 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:54.643 13:49:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:54.643 13:49:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.643 13:49:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.643 13:49:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.643 13:49:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.643 13:49:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.643 13:49:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.643 13:49:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.643 13:49:45 -- accel/accel.sh@42 -- # jq -r . 00:07:54.643 [2024-07-23 13:49:45.329511] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
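The fill case above decodes its flags in the configuration it prints: -f 128 becomes "Fill pattern: 0x80" (128 decimal = 0x80), and -q 64 -a 64 raise the queue and allocate depths to 64. Run outside the harness it would look roughly like this, a sketch assuming the build tree used by this job (hugepage setup usually requires root):

    $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    $ sudo ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y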
00:07:54.643 [2024-07-23 13:49:45.329606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901018 ] 00:07:54.643 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.643 [2024-07-23 13:49:45.450285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.643 [2024-07-23 13:49:45.543055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val=0x1 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val=fill 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val=0x80 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val=software 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val=64 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- accel/accel.sh@21 -- # val=64 00:07:54.643 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.643 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.643 13:49:45 -- 
accel/accel.sh@21 -- # val=1 00:07:54.644 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.644 13:49:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.644 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.644 13:49:45 -- accel/accel.sh@21 -- # val=Yes 00:07:54.644 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.644 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.644 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:54.644 13:49:45 -- accel/accel.sh@21 -- # val= 00:07:54.644 13:49:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # IFS=: 00:07:54.644 13:49:45 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@21 -- # val= 00:07:56.024 13:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # IFS=: 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@21 -- # val= 00:07:56.024 13:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # IFS=: 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@21 -- # val= 00:07:56.024 13:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # IFS=: 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@21 -- # val= 00:07:56.024 13:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # IFS=: 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@21 -- # val= 00:07:56.024 13:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # IFS=: 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@21 -- # val= 00:07:56.024 13:49:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # IFS=: 00:07:56.024 13:49:46 -- accel/accel.sh@20 -- # read -r var val 00:07:56.024 13:49:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.024 13:49:46 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:56.024 13:49:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.024 00:07:56.024 real 0m2.879s 00:07:56.024 user 0m2.534s 00:07:56.024 sys 0m0.347s 00:07:56.024 13:49:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.024 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:56.024 ************************************ 00:07:56.024 END TEST accel_fill 00:07:56.024 ************************************ 00:07:56.024 13:49:46 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:56.024 13:49:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:56.024 13:49:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.024 13:49:46 -- common/autotest_common.sh@10 -- # set +x 00:07:56.024 ************************************ 00:07:56.024 START TEST 
accel_copy_crc32c 00:07:56.024 ************************************ 00:07:56.024 13:49:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:56.024 13:49:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.024 13:49:46 -- accel/accel.sh@17 -- # local accel_module 00:07:56.024 13:49:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:56.024 13:49:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:56.024 13:49:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.024 13:49:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.024 13:49:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.024 13:49:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.024 13:49:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.024 13:49:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.024 13:49:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.024 13:49:46 -- accel/accel.sh@42 -- # jq -r . 00:07:56.024 [2024-07-23 13:49:46.811055] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:56.024 [2024-07-23 13:49:46.811147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901213 ] 00:07:56.024 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.024 [2024-07-23 13:49:46.931166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.024 [2024-07-23 13:49:47.027742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.405 13:49:48 -- accel/accel.sh@18 -- # out=' 00:07:57.405 SPDK Configuration: 00:07:57.405 Core mask: 0x1 00:07:57.405 00:07:57.405 Accel Perf Configuration: 00:07:57.405 Workload Type: copy_crc32c 00:07:57.405 CRC-32C seed: 0 00:07:57.405 Vector size: 4096 bytes 00:07:57.405 Transfer size: 4096 bytes 00:07:57.405 Vector count 1 00:07:57.405 Module: software 00:07:57.405 Queue depth: 32 00:07:57.405 Allocate depth: 32 00:07:57.405 # threads/core: 1 00:07:57.405 Run time: 1 seconds 00:07:57.405 Verify: Yes 00:07:57.405 00:07:57.405 Running for 1 seconds... 00:07:57.405 00:07:57.405 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:57.405 ------------------------------------------------------------------------------------ 00:07:57.405 0,0 269600/s 1053 MiB/s 0 0 00:07:57.405 ==================================================================================== 00:07:57.405 Total 269600/s 1053 MiB/s 0 0' 00:07:57.405 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.405 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.405 13:49:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:57.405 13:49:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:57.405 13:49:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.405 13:49:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.405 13:49:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.405 13:49:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.405 13:49:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.405 13:49:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.405 13:49:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.405 13:49:48 -- accel/accel.sh@42 -- # jq -r . 
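copy_crc32c fuses the two earlier operations, copying each 4096-byte block and computing its CRC-32C in a single pass, which is why its rate sits below plain copy's. The summary row above checks out against the printed transfer count:

    $ echo $(( 269600 * 4096 / 1024 / 1024 ))   # -> MiB/s for the 0,0 row above
    1053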
00:07:57.405 [2024-07-23 13:49:48.248585] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:57.405 [2024-07-23 13:49:48.248668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901392 ] 00:07:57.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.405 [2024-07-23 13:49:48.368085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.665 [2024-07-23 13:49:48.464901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val=0x1 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val=0 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val=software 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.665 13:49:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.665 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.665 13:49:48 -- accel/accel.sh@21 -- # val=32 00:07:57.665 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 
00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.666 13:49:48 -- accel/accel.sh@21 -- # val=32 00:07:57.666 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.666 13:49:48 -- accel/accel.sh@21 -- # val=1 00:07:57.666 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.666 13:49:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.666 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.666 13:49:48 -- accel/accel.sh@21 -- # val=Yes 00:07:57.666 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.666 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.666 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:57.666 13:49:48 -- accel/accel.sh@21 -- # val= 00:07:57.666 13:49:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # IFS=: 00:07:57.666 13:49:48 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@21 -- # val= 00:07:59.045 13:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@21 -- # val= 00:07:59.045 13:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@21 -- # val= 00:07:59.045 13:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@21 -- # val= 00:07:59.045 13:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@21 -- # val= 00:07:59.045 13:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@21 -- # val= 00:07:59.045 13:49:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # IFS=: 00:07:59.045 13:49:49 -- accel/accel.sh@20 -- # read -r var val 00:07:59.045 13:49:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.045 13:49:49 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:59.045 13:49:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.045 00:07:59.045 real 0m2.890s 00:07:59.045 user 0m2.533s 00:07:59.045 sys 0m0.361s 00:07:59.045 13:49:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.045 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:59.045 ************************************ 00:07:59.045 END TEST accel_copy_crc32c 00:07:59.045 ************************************ 00:07:59.045 
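The next case repeats copy_crc32c with -C 2, so each operation consumes two 4096-byte source vectors; the configuration printed below accordingly reports an 8192-byte transfer size with vector count 2. The equivalent direct invocation, under the same assumptions as the earlier sketches:

    $ sudo ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2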
13:49:49 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:59.045 13:49:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:59.045 13:49:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.045 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:59.046 ************************************ 00:07:59.046 START TEST accel_copy_crc32c_C2 00:07:59.046 ************************************ 00:07:59.046 13:49:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:59.046 13:49:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.046 13:49:49 -- accel/accel.sh@17 -- # local accel_module 00:07:59.046 13:49:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:59.046 13:49:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.046 13:49:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.046 13:49:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:59.046 13:49:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.046 13:49:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.046 13:49:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.046 13:49:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.046 13:49:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.046 13:49:49 -- accel/accel.sh@42 -- # jq -r . 00:07:59.046 [2024-07-23 13:49:49.746432] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:59.046 [2024-07-23 13:49:49.746525] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901596 ] 00:07:59.046 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.046 [2024-07-23 13:49:49.866524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.046 [2024-07-23 13:49:49.963725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.425 13:49:51 -- accel/accel.sh@18 -- # out=' 00:08:00.425 SPDK Configuration: 00:08:00.425 Core mask: 0x1 00:08:00.425 00:08:00.425 Accel Perf Configuration: 00:08:00.425 Workload Type: copy_crc32c 00:08:00.425 CRC-32C seed: 0 00:08:00.425 Vector size: 4096 bytes 00:08:00.425 Transfer size: 8192 bytes 00:08:00.425 Vector count 2 00:08:00.425 Module: software 00:08:00.425 Queue depth: 32 00:08:00.425 Allocate depth: 32 00:08:00.425 # threads/core: 1 00:08:00.425 Run time: 1 seconds 00:08:00.425 Verify: Yes 00:08:00.425 00:08:00.425 Running for 1 seconds... 
00:08:00.425 00:08:00.425 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:00.425 ------------------------------------------------------------------------------------ 00:08:00.425 0,0 188864/s 1475 MiB/s 0 0 00:08:00.425 ==================================================================================== 00:08:00.425 Total 188864/s 737 MiB/s 0 0' 00:08:00.425 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.425 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.425 13:49:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:00.425 13:49:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:00.425 13:49:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.425 13:49:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:00.425 13:49:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.425 13:49:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.425 13:49:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:00.425 13:49:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:00.425 13:49:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:00.425 13:49:51 -- accel/accel.sh@42 -- # jq -r . 00:08:00.425 [2024-07-23 13:49:51.199030] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:00.425 [2024-07-23 13:49:51.199126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901787 ] 00:08:00.425 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.425 [2024-07-23 13:49:51.319632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.425 [2024-07-23 13:49:51.417292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=0x1 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=0 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 
00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=software 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@23 -- # accel_module=software 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=32 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=32 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=1 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val=Yes 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:00.684 13:49:51 -- accel/accel.sh@21 -- # val= 00:08:00.684 13:49:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # IFS=: 00:08:00.684 13:49:51 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@21 -- # val= 00:08:01.621 13:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # IFS=: 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@21 -- # val= 00:08:01.621 13:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # IFS=: 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@21 -- # val= 00:08:01.621 13:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # IFS=: 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@21 -- # val= 00:08:01.621 13:49:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # IFS=: 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@21 -- # val= 00:08:01.621 13:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # IFS=: 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@21 -- # val= 00:08:01.621 13:49:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # IFS=: 00:08:01.621 13:49:52 -- accel/accel.sh@20 -- # read -r var val 00:08:01.621 13:49:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:01.621 13:49:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:01.621 13:49:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.621 00:08:01.621 real 0m2.911s 00:08:01.621 user 0m2.550s 00:08:01.621 sys 0m0.363s 00:08:01.621 13:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.621 13:49:52 -- common/autotest_common.sh@10 -- # set +x 00:08:01.621 ************************************ 00:08:01.621 END TEST accel_copy_crc32c_C2 00:08:01.621 ************************************ 00:08:01.880 13:49:52 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:01.880 13:49:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:01.880 13:49:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.880 13:49:52 -- common/autotest_common.sh@10 -- # set +x 00:08:01.880 ************************************ 00:08:01.880 START TEST accel_dualcast 00:08:01.880 ************************************ 00:08:01.880 13:49:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:01.880 13:49:52 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.881 13:49:52 -- accel/accel.sh@17 -- # local accel_module 00:08:01.881 13:49:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:01.881 13:49:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:01.881 13:49:52 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.881 13:49:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.881 13:49:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.881 13:49:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.881 13:49:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.881 13:49:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.881 13:49:52 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.881 13:49:52 -- accel/accel.sh@42 -- # jq -r . 00:08:01.881 [2024-07-23 13:49:52.709520] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:01.881 [2024-07-23 13:49:52.709614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902064 ] 00:08:01.881 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.881 [2024-07-23 13:49:52.831137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.139 [2024-07-23 13:49:52.934159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.517 13:49:54 -- accel/accel.sh@18 -- # out=' 00:08:03.517 SPDK Configuration: 00:08:03.517 Core mask: 0x1 00:08:03.517 00:08:03.517 Accel Perf Configuration: 00:08:03.517 Workload Type: dualcast 00:08:03.517 Transfer size: 4096 bytes 00:08:03.517 Vector count 1 00:08:03.517 Module: software 00:08:03.517 Queue depth: 32 00:08:03.517 Allocate depth: 32 00:08:03.517 # threads/core: 1 00:08:03.517 Run time: 1 seconds 00:08:03.517 Verify: Yes 00:08:03.517 00:08:03.517 Running for 1 seconds... 00:08:03.517 00:08:03.517 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.517 ------------------------------------------------------------------------------------ 00:08:03.517 0,0 410016/s 1601 MiB/s 0 0 00:08:03.517 ==================================================================================== 00:08:03.517 Total 410016/s 1601 MiB/s 0 0' 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:03.517 13:49:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:03.517 13:49:54 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.517 13:49:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.517 13:49:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.517 13:49:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.517 13:49:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.517 13:49:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.517 13:49:54 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.517 13:49:54 -- accel/accel.sh@42 -- # jq -r . 00:08:03.517 [2024-07-23 13:49:54.169193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
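dualcast writes one 4096-byte source buffer out to two destinations per operation; the 410016 transfers/s in the summary above matches the printed 1601 MiB/s when the payload is counted once (410016 * 4096 / 2^20 ~= 1601). Direct invocation, same assumptions as before:

    $ sudo ./build/examples/accel_perf -t 1 -w dualcast -y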
00:08:03.517 [2024-07-23 13:49:54.169291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902287 ] 00:08:03.517 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.517 [2024-07-23 13:49:54.290496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.517 [2024-07-23 13:49:54.387301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val=0x1 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val=dualcast 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.517 13:49:54 -- accel/accel.sh@21 -- # val=software 00:08:03.517 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.517 13:49:54 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.517 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 -- accel/accel.sh@21 -- # val=32 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 -- accel/accel.sh@21 -- # val=32 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 -- accel/accel.sh@21 -- # val=1 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 
-- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 -- accel/accel.sh@21 -- # val=Yes 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:03.518 13:49:54 -- accel/accel.sh@21 -- # val= 00:08:03.518 13:49:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # IFS=: 00:08:03.518 13:49:54 -- accel/accel.sh@20 -- # read -r var val 00:08:04.896 13:49:55 -- accel/accel.sh@21 -- # val= 00:08:04.896 13:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.896 13:49:55 -- accel/accel.sh@20 -- # IFS=: 00:08:04.896 13:49:55 -- accel/accel.sh@20 -- # read -r var val 00:08:04.896 13:49:55 -- accel/accel.sh@21 -- # val= 00:08:04.896 13:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # IFS=: 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # read -r var val 00:08:04.897 13:49:55 -- accel/accel.sh@21 -- # val= 00:08:04.897 13:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # IFS=: 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # read -r var val 00:08:04.897 13:49:55 -- accel/accel.sh@21 -- # val= 00:08:04.897 13:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # IFS=: 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # read -r var val 00:08:04.897 13:49:55 -- accel/accel.sh@21 -- # val= 00:08:04.897 13:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # IFS=: 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # read -r var val 00:08:04.897 13:49:55 -- accel/accel.sh@21 -- # val= 00:08:04.897 13:49:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # IFS=: 00:08:04.897 13:49:55 -- accel/accel.sh@20 -- # read -r var val 00:08:04.897 13:49:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.897 13:49:55 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:04.897 13:49:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.897 00:08:04.897 real 0m2.916s 00:08:04.897 user 0m2.547s 00:08:04.897 sys 0m0.371s 00:08:04.897 13:49:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.897 13:49:55 -- common/autotest_common.sh@10 -- # set +x 00:08:04.897 ************************************ 00:08:04.897 END TEST accel_dualcast 00:08:04.897 ************************************ 00:08:04.897 13:49:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:04.897 13:49:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:04.897 13:49:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.897 13:49:55 -- common/autotest_common.sh@10 -- # set +x 00:08:04.897 ************************************ 00:08:04.897 START TEST accel_compare 00:08:04.897 ************************************ 00:08:04.897 13:49:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:04.897 13:49:55 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.897 13:49:55 
00:08:04.897 13:49:55 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:08:04.897 13:49:55 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:08:04.897 13:49:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:04.897 13:49:55 -- common/autotest_common.sh@10 -- # set +x
00:08:04.897 ************************************
00:08:04.897 START TEST accel_compare
00:08:04.897 ************************************
00:08:04.897 13:49:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y
00:08:04.897 13:49:55 -- accel/accel.sh@16 -- # local accel_opc
00:08:04.897 13:49:55 -- accel/accel.sh@17 -- # local accel_module
00:08:04.897 13:49:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:08:04.897 13:49:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:08:04.897 13:49:55 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace (accel_json_cfg=(), [[ 0 -gt 0 ]] checks, [[ -n '' ]], local IFS=',', jq -r .) omitted ...]
00:08:04.897 [2024-07-23 13:49:55.669408] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:04.897 [2024-07-23 13:49:55.669508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902519 ]
00:08:04.897 EAL: No free 2048 kB hugepages reported on node 1
00:08:04.897 [2024-07-23 13:49:55.787774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:04.897 [2024-07-23 13:49:55.880912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:06.277 13:49:57 -- accel/accel.sh@18 -- # out='
00:08:06.277 SPDK Configuration:
00:08:06.277 Core mask: 0x1
00:08:06.277
00:08:06.277 Accel Perf Configuration:
00:08:06.277 Workload Type: compare
00:08:06.277 Transfer size: 4096 bytes
00:08:06.277 Vector count 1
00:08:06.277 Module: software
00:08:06.277 Queue depth: 32
00:08:06.277 Allocate depth: 32
00:08:06.277 # threads/core: 1
00:08:06.277 Run time: 1 seconds
00:08:06.277 Verify: Yes
00:08:06.277
00:08:06.277 Running for 1 seconds...
00:08:06.277
00:08:06.277 Core,Thread  Transfers     Bandwidth    Failed  Miscompares
00:08:06.277 ------------------------------------------------------------------------------------
00:08:06.277 0,0          511680/s      1998 MiB/s   0       0
00:08:06.277 ====================================================================================
00:08:06.277 Total        511680/s      1998 MiB/s   0       0'
00:08:06.277 13:49:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:08:06.277 13:49:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:08:06.277 13:49:57 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:06.277 [2024-07-23 13:49:57.103254] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:06.277 [2024-07-23 13:49:57.103332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902703 ]
00:08:06.277 EAL: No free 2048 kB hugepages reported on node 1
00:08:06.277 [2024-07-23 13:49:57.220395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:06.537 [2024-07-23 13:49:57.313768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val=0x1
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val=compare
00:08:06.537 13:49:57 -- accel/accel.sh@24 -- # accel_opc=compare
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val='4096 bytes'
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val=software
00:08:06.537 13:49:57 -- accel/accel.sh@23 -- # accel_module=software
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val=32
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val=32
00:08:06.537 13:49:57 -- accel/accel.sh@21 -- # val=1
00:08:06.538 13:49:57 -- accel/accel.sh@21 -- # val='1 seconds'
00:08:06.538 13:49:57 -- accel/accel.sh@21 -- # val=Yes
[... empty "val=" reads and interleaved case "$var" in / IFS=: / read -r var val xtrace omitted ...]
00:08:07.915 13:49:58 -- accel/accel.sh@28 -- # [[ -n software ]]
00:08:07.915 13:49:58 -- accel/accel.sh@28 -- # [[ -n compare ]]
00:08:07.915 13:49:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:07.915
00:08:07.915 real 0m2.865s
00:08:07.915 user 0m2.528s
00:08:07.915 sys 0m0.340s
00:08:07.915 13:49:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:07.915 13:49:58 -- common/autotest_common.sh@10 -- # set +x
00:08:07.915 ************************************
00:08:07.915 END TEST accel_compare
00:08:07.915 ************************************
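(Note: the reported bandwidth is consistent with transfers/s multiplied by the 4096-byte transfer size. A quick shell-arithmetic check:

    echo $((511680 * 4096 / 1048576))   # prints 1998, matching the 1998 MiB/s above
)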
00:08:07.915 13:49:58 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:08:07.915 13:49:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:08:07.915 13:49:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:07.915 13:49:58 -- common/autotest_common.sh@10 -- # set +x
00:08:07.915 ************************************
00:08:07.915 START TEST accel_xor
00:08:07.915 ************************************
00:08:07.915 13:49:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y
00:08:07.915 13:49:58 -- accel/accel.sh@16 -- # local accel_opc
00:08:07.915 13:49:58 -- accel/accel.sh@17 -- # local accel_module
00:08:07.915 13:49:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:08:07.915 13:49:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:08:07.915 13:49:58 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:07.915 [2024-07-23 13:49:58.575857] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:07.915 [2024-07-23 13:49:58.575929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902905 ]
00:08:07.915 EAL: No free 2048 kB hugepages reported on node 1
00:08:07.915 [2024-07-23 13:49:58.693231] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:07.915 [2024-07-23 13:49:58.790094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:09.296 13:50:00 -- accel/accel.sh@18 -- # out='
00:08:09.296 SPDK Configuration:
00:08:09.296 Core mask: 0x1
00:08:09.296
00:08:09.296 Accel Perf Configuration:
00:08:09.296 Workload Type: xor
00:08:09.296 Source buffers: 2
00:08:09.296 Transfer size: 4096 bytes
00:08:09.296 Vector count 1
00:08:09.296 Module: software
00:08:09.296 Queue depth: 32
00:08:09.296 Allocate depth: 32
00:08:09.296 # threads/core: 1
00:08:09.296 Run time: 1 seconds
00:08:09.296 Verify: Yes
00:08:09.296
00:08:09.296 Running for 1 seconds...
00:08:09.296
00:08:09.296 Core,Thread  Transfers     Bandwidth    Failed  Miscompares
00:08:09.296 ------------------------------------------------------------------------------------
00:08:09.296 0,0          467136/s      1824 MiB/s   0       0
00:08:09.296 ====================================================================================
00:08:09.296 Total        467136/s      1824 MiB/s   0       0'
00:08:09.296 13:50:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:08:09.296 13:50:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:08:09.296 13:50:00 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:09.296 [2024-07-23 13:50:00.024229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:09.296 [2024-07-23 13:50:00.024323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903083 ]
00:08:09.296 EAL: No free 2048 kB hugepages reported on node 1
00:08:09.296 [2024-07-23 13:50:00.146542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.296 [2024-07-23 13:50:00.243652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... accel.sh@21 value reads (0x1, xor, 2, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes) with interleaved case/IFS/read xtrace omitted ...]
00:08:10.676 13:50:01 -- accel/accel.sh@28 -- # [[ -n software ]]
00:08:10.676 13:50:01 -- accel/accel.sh@28 -- # [[ -n xor ]]
00:08:10.676 13:50:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:10.676
00:08:10.676 real 0m2.899s
00:08:10.676 user 0m2.546s
00:08:10.676 sys 0m0.356s
00:08:10.676 13:50:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:10.676 13:50:01 -- common/autotest_common.sh@10 -- # set +x
00:08:10.676 ************************************
00:08:10.676 END TEST accel_xor
00:08:10.676 ************************************
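(Note: the xor case above used the default two source buffers ("Source buffers: 2"); the next case passes -x 3 for a three-source xor. A sketch of the two invocations, using only flags visible in this log and an assumed relative build path:

    ./spdk/build/examples/accel_perf -t 1 -w xor -y        # two-source xor
    ./spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # three-source xor
)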
00:08:10.676 13:50:01 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:08:10.676 13:50:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:08:10.676 13:50:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:10.676 13:50:01 -- common/autotest_common.sh@10 -- # set +x
00:08:10.676 ************************************
00:08:10.676 START TEST accel_xor
00:08:10.676 ************************************
00:08:10.676 13:50:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3
00:08:10.676 13:50:01 -- accel/accel.sh@16 -- # local accel_opc
00:08:10.676 13:50:01 -- accel/accel.sh@17 -- # local accel_module
00:08:10.676 13:50:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:08:10.676 13:50:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:08:10.676 13:50:01 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:10.676 [2024-07-23 13:50:01.523689] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:10.676 [2024-07-23 13:50:01.523766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903281 ]
00:08:10.676 EAL: No free 2048 kB hugepages reported on node 1
00:08:10.676 [2024-07-23 13:50:01.643027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.935 [2024-07-23 13:50:01.739678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.349 13:50:02 -- accel/accel.sh@18 -- # out='
00:08:12.349 SPDK Configuration:
00:08:12.349 Core mask: 0x1
00:08:12.349
00:08:12.349 Accel Perf Configuration:
00:08:12.349 Workload Type: xor
00:08:12.349 Source buffers: 3
00:08:12.349 Transfer size: 4096 bytes
00:08:12.349 Vector count 1
00:08:12.349 Module: software
00:08:12.349 Queue depth: 32
00:08:12.349 Allocate depth: 32
00:08:12.349 # threads/core: 1
00:08:12.349 Run time: 1 seconds
00:08:12.349 Verify: Yes
00:08:12.349
00:08:12.349 Running for 1 seconds...
00:08:12.349
00:08:12.349 Core,Thread  Transfers     Bandwidth    Failed  Miscompares
00:08:12.349 ------------------------------------------------------------------------------------
00:08:12.349 0,0          439552/s      1717 MiB/s   0       0
00:08:12.349 ====================================================================================
00:08:12.349 Total        439552/s      1717 MiB/s   0       0'
00:08:12.349 13:50:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:08:12.349 13:50:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:08:12.349 13:50:02 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:12.349 [2024-07-23 13:50:02.974836] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:12.349 [2024-07-23 13:50:02.974954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903465 ]
00:08:12.349 EAL: No free 2048 kB hugepages reported on node 1
00:08:12.349 [2024-07-23 13:50:03.094221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.349 [2024-07-23 13:50:03.190616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... accel.sh@21 value reads (0x1, xor, 3, '4096 bytes', software, 32, 32, 1, '1 seconds', Yes) with interleaved case/IFS/read xtrace omitted ...]
00:08:13.728 13:50:04 -- accel/accel.sh@28 -- # [[ -n software ]]
00:08:13.728 13:50:04 -- accel/accel.sh@28 -- # [[ -n xor ]]
00:08:13.728 13:50:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:13.728
00:08:13.728 real 0m2.899s
00:08:13.728 user 0m2.547s
00:08:13.728 sys 0m0.356s
00:08:13.728 13:50:04 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:13.728 13:50:04 -- common/autotest_common.sh@10 -- # set +x
00:08:13.728 ************************************
00:08:13.728 END TEST accel_xor
00:08:13.728 ************************************
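(Note: the third source buffer lowers software xor throughput from 467136/s (1824 MiB/s) to 439552/s (1717 MiB/s), roughly a 6% drop, which is plausibly the cost of one extra 4 KiB source read per operation on the same single core.)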
00:08:13.728 13:50:04 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:08:13.728 13:50:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:08:13.728 13:50:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:13.728 13:50:04 -- common/autotest_common.sh@10 -- # set +x
00:08:13.728 ************************************
00:08:13.728 START TEST accel_dif_verify
00:08:13.728 ************************************
00:08:13.728 13:50:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify
00:08:13.728 13:50:04 -- accel/accel.sh@16 -- # local accel_opc
00:08:13.728 13:50:04 -- accel/accel.sh@17 -- # local accel_module
00:08:13.728 13:50:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:08:13.728 13:50:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:13.728 13:50:04 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:13.728 [2024-07-23 13:50:04.470806] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:13.728 [2024-07-23 13:50:04.470878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903662 ]
00:08:13.728 EAL: No free 2048 kB hugepages reported on node 1
00:08:13.728 [2024-07-23 13:50:04.587625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.728 [2024-07-23 13:50:04.685409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.109 13:50:05 -- accel/accel.sh@18 -- # out='
00:08:15.109 SPDK Configuration:
00:08:15.109 Core mask: 0x1
00:08:15.109
00:08:15.109 Accel Perf Configuration:
00:08:15.109 Workload Type: dif_verify
00:08:15.109 Vector size: 4096 bytes
00:08:15.109 Transfer size: 4096 bytes
00:08:15.109 Block size: 512 bytes
00:08:15.109 Metadata size: 8 bytes
00:08:15.109 Vector count 1
00:08:15.109 Module: software
00:08:15.109 Queue depth: 32
00:08:15.109 Allocate depth: 32
00:08:15.109 # threads/core: 1
00:08:15.109 Run time: 1 seconds
00:08:15.109 Verify: No
00:08:15.109
00:08:15.109 Running for 1 seconds...
00:08:15.109
00:08:15.109 Core,Thread  Transfers     Bandwidth    Failed  Miscompares
00:08:15.109 ------------------------------------------------------------------------------------
00:08:15.109 0,0          147552/s      585 MiB/s    0       0
00:08:15.109 ====================================================================================
00:08:15.109 Total        147552/s      576 MiB/s    0       0'
00:08:15.109 13:50:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:08:15.109 13:50:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:08:15.109 13:50:05 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:15.109 [2024-07-23 13:50:05.918422] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:15.109 [2024-07-23 13:50:05.918523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903849 ]
00:08:15.109 EAL: No free 2048 kB hugepages reported on node 1
00:08:15.109 [2024-07-23 13:50:06.039197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:15.368 [2024-07-23 13:50:06.136133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... accel.sh@21 value reads (0x1, dif_verify, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No) with interleaved case/IFS/read xtrace omitted ...]
00:08:16.749 13:50:07 -- accel/accel.sh@28 -- # [[ -n software ]]
00:08:16.749 13:50:07 -- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:08:16.749 13:50:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:16.749
00:08:16.749 real 0m2.897s
00:08:16.749 user 0m2.537s
00:08:16.749 sys 0m0.364s
00:08:16.749 13:50:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:16.749 13:50:07 -- common/autotest_common.sh@10 -- # set +x
00:08:16.749 ************************************
00:08:16.749 END TEST accel_dif_verify
00:08:16.749 ************************************
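(Note: for the DIF cases the per-core and total bandwidth columns disagree (585 vs 576 MiB/s at the same 147552 transfers/s). One reading, an assumption not confirmed by this log, is that the per-core figure counts the 8-byte DIF field per 512-byte block (4096 + 8*8 = 4160 bytes per transfer) while the total counts data bytes only:

    echo $((147552 * 4160 / 1048576))   # 585
    echo $((147552 * 4096 / 1048576))   # 576
)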
00:08:16.749 13:50:07 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:08:16.749 13:50:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:08:16.749 13:50:07 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:16.749 13:50:07 -- common/autotest_common.sh@10 -- # set +x
00:08:16.749 ************************************
00:08:16.749 START TEST accel_dif_generate
00:08:16.749 ************************************
00:08:16.749 13:50:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate
00:08:16.749 13:50:07 -- accel/accel.sh@16 -- # local accel_opc
00:08:16.749 13:50:07 -- accel/accel.sh@17 -- # local accel_module
00:08:16.749 13:50:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:08:16.749 13:50:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:16.749 13:50:07 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:16.749 [2024-07-23 13:50:07.408352] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:16.749 [2024-07-23 13:50:07.408412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904044 ]
00:08:16.749 EAL: No free 2048 kB hugepages reported on node 1
00:08:16.749 [2024-07-23 13:50:07.511127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.749 [2024-07-23 13:50:07.608456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.128 13:50:08 -- accel/accel.sh@18 -- # out='
00:08:18.128 SPDK Configuration:
00:08:18.128 Core mask: 0x1
00:08:18.128
00:08:18.128 Accel Perf Configuration:
00:08:18.128 Workload Type: dif_generate
00:08:18.128 Vector size: 4096 bytes
00:08:18.128 Transfer size: 4096 bytes
00:08:18.128 Block size: 512 bytes
00:08:18.128 Metadata size: 8 bytes
00:08:18.128 Vector count 1
00:08:18.128 Module: software
00:08:18.128 Queue depth: 32
00:08:18.128 Allocate depth: 32
00:08:18.128 # threads/core: 1
00:08:18.128 Run time: 1 seconds
00:08:18.128 Verify: No
00:08:18.128
00:08:18.128 Running for 1 seconds...
00:08:18.128
00:08:18.128 Core,Thread  Transfers     Bandwidth    Failed  Miscompares
00:08:18.128 ------------------------------------------------------------------------------------
00:08:18.128 0,0          180416/s      715 MiB/s    0       0
00:08:18.128 ====================================================================================
00:08:18.128 Total        180416/s      704 MiB/s    0       0'
00:08:18.128 13:50:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:08:18.128 13:50:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:08:18.128 13:50:08 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:18.128 [2024-07-23 13:50:08.837262] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:18.128 [2024-07-23 13:50:08.837359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904232 ]
00:08:18.128 EAL: No free 2048 kB hugepages reported on node 1
00:08:18.128 [2024-07-23 13:50:08.956768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.128 [2024-07-23 13:50:09.051678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... accel.sh@21 value reads (0x1, dif_generate, '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software, 32, 32, 1, '1 seconds', No) with interleaved case/IFS/read xtrace omitted ...]
00:08:19.509 13:50:10 -- accel/accel.sh@28 -- # [[ -n software ]]
00:08:19.509 13:50:10 -- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:08:19.509 13:50:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:19.509
00:08:19.509 real 0m2.863s
00:08:19.509 user 0m2.515s
00:08:19.509 sys 0m0.354s
00:08:19.509 13:50:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:19.509 13:50:10 -- common/autotest_common.sh@10 -- # set +x
00:08:19.509 ************************************
00:08:19.509 END TEST accel_dif_generate
00:08:19.509 ************************************
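(Note: the same accounting fits dif_generate: 180416 * 4160 / 2^20 is about 715 MiB/s per core versus 180416 * 4096 / 2^20 is about 704 MiB/s total, so the column discrepancy looks systematic rather than a glitch in this run.)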
00:08:19.509 13:50:10 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:08:19.509 13:50:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:08:19.509 13:50:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:19.509 13:50:10 -- common/autotest_common.sh@10 -- # set +x
00:08:19.509 ************************************
00:08:19.509 START TEST accel_dif_generate_copy
00:08:19.509 ************************************
00:08:19.509 13:50:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy
00:08:19.509 13:50:10 -- accel/accel.sh@16 -- # local accel_opc
00:08:19.509 13:50:10 -- accel/accel.sh@17 -- # local accel_module
00:08:19.509 13:50:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
00:08:19.509 13:50:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:19.509 13:50:10 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:19.509 [2024-07-23 13:50:10.321492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:19.509 [2024-07-23 13:50:10.321575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904493 ]
00:08:19.509 EAL: No free 2048 kB hugepages reported on node 1
00:08:19.509 [2024-07-23 13:50:10.439089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.769 [2024-07-23 13:50:10.533067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.707 13:50:11 -- accel/accel.sh@18 -- # out='
00:08:20.707 SPDK Configuration:
00:08:20.707 Core mask: 0x1
00:08:20.707
00:08:20.707 Accel Perf Configuration:
00:08:20.707 Workload Type: dif_generate_copy
00:08:20.707 Vector size: 4096 bytes
00:08:20.707 Transfer size: 4096 bytes
00:08:20.707 Vector count 1
00:08:20.707 Module: software
00:08:20.707 Queue depth: 32
00:08:20.707 Allocate depth: 32
00:08:20.707 # threads/core: 1
00:08:20.707 Run time: 1 seconds
00:08:20.707 Verify: No
00:08:20.707
00:08:20.707 Running for 1 seconds...
00:08:20.707
00:08:20.707 Core,Thread  Transfers     Bandwidth    Failed  Miscompares
00:08:20.707 ------------------------------------------------------------------------------------
00:08:20.707 0,0          137120/s      543 MiB/s    0       0
00:08:20.707 ====================================================================================
00:08:20.707 Total        137120/s      535 MiB/s    0       0'
00:08:20.707 13:50:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:08:20.967 13:50:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:08:20.967 13:50:11 -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@32-@42 xtrace omitted ...]
00:08:20.967 [2024-07-23 13:50:11.741677] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:20.967 [2024-07-23 13:50:11.741755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904702 ]
00:08:20.967 EAL: No free 2048 kB hugepages reported on node 1
00:08:20.967 [2024-07-23 13:50:11.860217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.967 [2024-07-23 13:50:11.956244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... accel.sh@21 value reads (0x1, dif_generate_copy, '4096 bytes', '4096 bytes', software, 32, 32, 1, '1 seconds', No) with interleaved case/IFS/read xtrace omitted ...]
00:08:22.166 13:50:13 -- accel/accel.sh@28 -- # [[ -n software ]]
00:08:22.166 13:50:13 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:08:22.166 13:50:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:22.166
00:08:22.166 real 0m2.852s
00:08:22.166 user 0m2.518s
00:08:22.166 sys 0m0.337s
00:08:22.166 13:50:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:22.166 13:50:13 -- common/autotest_common.sh@10 -- # set +x
00:08:22.166 ************************************
00:08:22.166 END TEST accel_dif_generate_copy
00:08:22.166 ************************************
00:08:22.425 13:50:13 -- accel/accel.sh@107 -- # [[ y == y ]]
00:08:22.425 13:50:13 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:08:22.425 13:50:13 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
00:08:22.425 13:50:13 --
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.425 13:50:13 -- common/autotest_common.sh@10 -- # set +x 00:08:22.425 ************************************ 00:08:22.425 START TEST accel_comp 00:08:22.425 ************************************ 00:08:22.425 13:50:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:22.425 13:50:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.425 13:50:13 -- accel/accel.sh@17 -- # local accel_module 00:08:22.425 13:50:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:22.425 13:50:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:22.425 13:50:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.425 13:50:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.425 13:50:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.425 13:50:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.425 13:50:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.425 13:50:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.425 13:50:13 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.425 13:50:13 -- accel/accel.sh@42 -- # jq -r . 00:08:22.425 [2024-07-23 13:50:13.223657] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:22.425 [2024-07-23 13:50:13.223753] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904972 ] 00:08:22.425 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.425 [2024-07-23 13:50:13.343338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.425 [2024-07-23 13:50:13.440084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.803 13:50:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:23.803 00:08:23.803 SPDK Configuration: 00:08:23.803 Core mask: 0x1 00:08:23.803 00:08:23.803 Accel Perf Configuration: 00:08:23.803 Workload Type: compress 00:08:23.803 Transfer size: 4096 bytes 00:08:23.803 Vector count 1 00:08:23.803 Module: software 00:08:23.803 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:23.803 Queue depth: 32 00:08:23.803 Allocate depth: 32 00:08:23.803 # threads/core: 1 00:08:23.803 Run time: 1 seconds 00:08:23.803 Verify: No 00:08:23.803 00:08:23.803 Running for 1 seconds... 
00:08:23.803 00:08:23.803 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:23.803 ------------------------------------------------------------------------------------ 00:08:23.803 0,0 44608/s 174 MiB/s 0 0 00:08:23.803 ==================================================================================== 00:08:23.803 Total 44608/s 174 MiB/s 0 0' 00:08:23.803 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:23.803 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:23.803 13:50:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:23.803 13:50:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:23.803 13:50:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.803 13:50:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:23.803 13:50:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.803 13:50:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.803 13:50:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:23.803 13:50:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:23.803 13:50:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:23.803 13:50:14 -- accel/accel.sh@42 -- # jq -r . 00:08:23.803 [2024-07-23 13:50:14.673831] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:23.803 [2024-07-23 13:50:14.673925] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905152 ] 00:08:23.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.803 [2024-07-23 13:50:14.794642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.063 [2024-07-23 13:50:14.891423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=0x1 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=compress 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063
13:50:14 -- accel/accel.sh@24 -- # accel_opc=compress 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=software 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@23 -- # accel_module=software 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=32 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=32 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=1 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val=No 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:24.063 13:50:14 -- accel/accel.sh@21 -- # val= 00:08:24.063 13:50:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # IFS=: 00:08:24.063 13:50:14 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@21 -- # val= 00:08:25.443 13:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@21 -- # val= 00:08:25.443 13:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@21 -- # val= 00:08:25.443 13:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # 
IFS=: 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@21 -- # val= 00:08:25.443 13:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@21 -- # val= 00:08:25.443 13:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@21 -- # val= 00:08:25.443 13:50:16 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # IFS=: 00:08:25.443 13:50:16 -- accel/accel.sh@20 -- # read -r var val 00:08:25.443 13:50:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:25.443 13:50:16 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:25.443 13:50:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.443 00:08:25.443 real 0m2.910s 00:08:25.443 user 0m2.542s 00:08:25.443 sys 0m0.372s 00:08:25.443 13:50:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.443 13:50:16 -- common/autotest_common.sh@10 -- # set +x 00:08:25.443 ************************************ 00:08:25.443 END TEST accel_comp 00:08:25.443 ************************************ 00:08:25.443 13:50:16 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:25.443 13:50:16 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:25.443 13:50:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.443 13:50:16 -- common/autotest_common.sh@10 -- # set +x 00:08:25.443 ************************************ 00:08:25.443 START TEST accel_decomp 00:08:25.443 ************************************ 00:08:25.443 13:50:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:25.443 13:50:16 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.443 13:50:16 -- accel/accel.sh@17 -- # local accel_module 00:08:25.443 13:50:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:25.443 13:50:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:25.443 13:50:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.443 13:50:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.443 13:50:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.443 13:50:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.443 13:50:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.443 13:50:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.443 13:50:16 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.443 13:50:16 -- accel/accel.sh@42 -- # jq -r . 00:08:25.443 [2024-07-23 13:50:16.177015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
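The START TEST/END TEST banners and the real/user/sys triples throughout this log come from the run_test wrapper in autotest_common.sh, which prints a banner around the command it executes and times it. A simplified sketch of such a wrapper (the real one also propagates exit status and manages xtrace state, which this omits):

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"    # produces the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }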
00:08:25.443 [2024-07-23 13:50:16.177107] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905353 ] 00:08:25.443 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.443 [2024-07-23 13:50:16.296990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.443 [2024-07-23 13:50:16.393696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.822 13:50:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:26.822 00:08:26.822 SPDK Configuration: 00:08:26.822 Core mask: 0x1 00:08:26.822 00:08:26.822 Accel Perf Configuration: 00:08:26.822 Workload Type: decompress 00:08:26.822 Transfer size: 4096 bytes 00:08:26.822 Vector count 1 00:08:26.822 Module: software 00:08:26.822 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:26.822 Queue depth: 32 00:08:26.822 Allocate depth: 32 00:08:26.822 # threads/core: 1 00:08:26.822 Run time: 1 seconds 00:08:26.822 Verify: Yes 00:08:26.822 00:08:26.822 Running for 1 seconds... 00:08:26.822 00:08:26.822 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:26.822 ------------------------------------------------------------------------------------ 00:08:26.822 0,0 61600/s 240 MiB/s 0 0 00:08:26.822 ==================================================================================== 00:08:26.822 Total 61600/s 240 MiB/s 0 0' 00:08:26.822 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:26.822 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:26.822 13:50:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:26.822 13:50:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:08:26.822 13:50:17 -- accel/accel.sh@12 -- # build_accel_config 00:08:26.822 13:50:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:26.822 13:50:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:26.822 13:50:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.822 13:50:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:26.822 13:50:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:26.822 13:50:17 -- accel/accel.sh@41 -- # local IFS=, 00:08:26.823 13:50:17 -- accel/accel.sh@42 -- # jq -r . 00:08:26.823 [2024-07-23 13:50:17.630021] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
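The long val= readouts that follow are bash xtrace output of accel.sh lines 20-24: a loop that splits each line of the captured accel_perf report on ':' and records the workload and module actually exercised, which the [[ -n software ]] / [[ -n decompress ]] checks at the end of each test then assert on. A sketch of a loop of that shape (the exact case patterns are assumptions inferred from the trace, not the verbatim accel.sh source):

    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val##* } ;;    # e.g. decompress
            *'Module'*) accel_module=${val##* } ;;        # e.g. software
        esac
    done <<< "$out"    # $out holds the captured accel_perf report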
00:08:26.823 [2024-07-23 13:50:17.630116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905534 ] 00:08:26.823 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.823 [2024-07-23 13:50:17.749491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.082 [2024-07-23 13:50:17.846428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=0x1 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=decompress 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=software 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=32 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 
13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=32 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=1 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val=Yes 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:27.082 13:50:17 -- accel/accel.sh@21 -- # val= 00:08:27.082 13:50:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # IFS=: 00:08:27.082 13:50:17 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@21 -- # val= 00:08:28.460 13:50:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # IFS=: 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@21 -- # val= 00:08:28.460 13:50:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # IFS=: 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@21 -- # val= 00:08:28.460 13:50:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # IFS=: 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@21 -- # val= 00:08:28.460 13:50:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # IFS=: 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@21 -- # val= 00:08:28.460 13:50:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # IFS=: 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@21 -- # val= 00:08:28.460 13:50:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # IFS=: 00:08:28.460 13:50:19 -- accel/accel.sh@20 -- # read -r var val 00:08:28.460 13:50:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:28.460 13:50:19 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:28.460 13:50:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.460 00:08:28.460 real 0m2.911s 00:08:28.460 user 0m2.553s 00:08:28.460 sys 0m0.364s 00:08:28.460 13:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.460 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:08:28.460 ************************************ 00:08:28.460 END TEST accel_decomp 00:08:28.460 ************************************ 00:08:28.460 13:50:19 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 
-w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.460 13:50:19 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:28.460 13:50:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.460 13:50:19 -- common/autotest_common.sh@10 -- # set +x 00:08:28.460 ************************************ 00:08:28.460 START TEST accel_decmop_full 00:08:28.460 ************************************ 00:08:28.460 13:50:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.460 13:50:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.460 13:50:19 -- accel/accel.sh@17 -- # local accel_module 00:08:28.460 13:50:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.460 13:50:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:28.460 13:50:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.460 13:50:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.460 13:50:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.460 13:50:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.460 13:50:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.460 13:50:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.460 13:50:19 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.460 13:50:19 -- accel/accel.sh@42 -- # jq -r . 00:08:28.460 [2024-07-23 13:50:19.137036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:28.460 [2024-07-23 13:50:19.137126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905735 ] 00:08:28.460 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.460 [2024-07-23 13:50:19.258855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.460 [2024-07-23 13:50:19.355595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.860 13:50:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:29.860 00:08:29.860 SPDK Configuration: 00:08:29.860 Core mask: 0x1 00:08:29.860 00:08:29.860 Accel Perf Configuration: 00:08:29.860 Workload Type: decompress 00:08:29.860 Transfer size: 111250 bytes 00:08:29.860 Vector count 1 00:08:29.860 Module: software 00:08:29.860 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:29.860 Queue depth: 32 00:08:29.860 Allocate depth: 32 00:08:29.860 # threads/core: 1 00:08:29.860 Run time: 1 seconds 00:08:29.860 Verify: Yes 00:08:29.860 00:08:29.860 Running for 1 seconds... 
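With -o 0 the harness switches from fixed 4096-byte transfers to the full 111250-byte input (see the Transfer size line in the configuration dump above), so the table that follows shows far fewer transfers per second; the same transfers-times-size arithmetic still applies:

    echo $((3840 * 111250 / 1024 / 1024))   # 407 MiB/s for 3840 transfers/s of 111250 bytes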
00:08:29.860 00:08:29.861 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:29.861 ------------------------------------------------------------------------------------ 00:08:29.861 0,0 3840/s 407 MiB/s 0 0 00:08:29.861 ==================================================================================== 00:08:29.861 Total 3840/s 407 MiB/s 0 0' 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:29.861 13:50:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:29.861 13:50:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:29.861 13:50:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:29.861 13:50:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:29.861 13:50:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.861 13:50:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.861 13:50:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:29.861 13:50:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:29.861 13:50:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:29.861 13:50:20 -- accel/accel.sh@42 -- # jq -r . 00:08:29.861 [2024-07-23 13:50:20.601081] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:29.861 [2024-07-23 13:50:20.601171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3905913 ] 00:08:29.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.861 [2024-07-23 13:50:20.722787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.861 [2024-07-23 13:50:20.832020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.861 13:50:20 -- accel/accel.sh@21 -- # val= 00:08:29.861 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:29.861 13:50:20 -- accel/accel.sh@21 -- # val= 00:08:29.861 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:29.861 13:50:20 -- accel/accel.sh@21 -- # val= 00:08:29.861 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:29.861 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142
"$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val= 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val=software 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val=32 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val=32 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val=1 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val=Yes 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val= 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:30.142 13:50:20 -- accel/accel.sh@21 -- # val= 00:08:30.142 13:50:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # IFS=: 00:08:30.142 13:50:20 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@21 -- # val= 00:08:31.081 13:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # IFS=: 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@21 -- # val= 00:08:31.081 13:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # IFS=: 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@21 -- # val= 00:08:31.081 13:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.081 13:50:22 
-- accel/accel.sh@20 -- # IFS=: 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@21 -- # val= 00:08:31.081 13:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # IFS=: 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@21 -- # val= 00:08:31.081 13:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # IFS=: 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@21 -- # val= 00:08:31.081 13:50:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # IFS=: 00:08:31.081 13:50:22 -- accel/accel.sh@20 -- # read -r var val 00:08:31.081 13:50:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:31.081 13:50:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:31.081 13:50:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:31.081 00:08:31.081 real 0m2.943s 00:08:31.081 user 0m2.569s 00:08:31.081 sys 0m0.376s 00:08:31.081 13:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.081 13:50:22 -- common/autotest_common.sh@10 -- # set +x 00:08:31.081 ************************************ 00:08:31.081 END TEST accel_decmop_full 00:08:31.081 ************************************ 00:08:31.081 13:50:22 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.081 13:50:22 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:31.081 13:50:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.081 13:50:22 -- common/autotest_common.sh@10 -- # set +x 00:08:31.081 ************************************ 00:08:31.081 START TEST accel_decomp_mcore 00:08:31.081 ************************************ 00:08:31.082 13:50:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.082 13:50:22 -- accel/accel.sh@16 -- # local accel_opc 00:08:31.082 13:50:22 -- accel/accel.sh@17 -- # local accel_module 00:08:31.082 13:50:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.341 13:50:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:31.341 13:50:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.341 13:50:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:31.341 13:50:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.341 13:50:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.341 13:50:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:31.341 13:50:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:31.341 13:50:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:31.341 13:50:22 -- accel/accel.sh@42 -- # jq -r . 00:08:31.341 [2024-07-23 13:50:22.121491] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:31.341 [2024-07-23 13:50:22.121594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906123 ] 00:08:31.341 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.341 [2024-07-23 13:50:22.239892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.341 [2024-07-23 13:50:22.336236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.341 [2024-07-23 13:50:22.336320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.341 [2024-07-23 13:50:22.336426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.341 [2024-07-23 13:50:22.336427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.720 13:50:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:32.720 00:08:32.720 SPDK Configuration: 00:08:32.720 Core mask: 0xf 00:08:32.720 00:08:32.720 Accel Perf Configuration: 00:08:32.720 Workload Type: decompress 00:08:32.720 Transfer size: 4096 bytes 00:08:32.720 Vector count 1 00:08:32.720 Module: software 00:08:32.720 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:32.720 Queue depth: 32 00:08:32.720 Allocate depth: 32 00:08:32.720 # threads/core: 1 00:08:32.720 Run time: 1 seconds 00:08:32.720 Verify: Yes 00:08:32.720 00:08:32.720 Running for 1 seconds... 00:08:32.720 00:08:32.721 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:32.721 ------------------------------------------------------------------------------------ 00:08:32.721 0,0 54560/s 213 MiB/s 0 0 00:08:32.721 3,0 54880/s 214 MiB/s 0 0 00:08:32.721 2,0 76768/s 299 MiB/s 0 0 00:08:32.721 1,0 55008/s 214 MiB/s 0 0 00:08:32.721 ==================================================================================== 00:08:32.721 Total 241216/s 942 MiB/s 0 0' 00:08:32.721 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.721 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.721 13:50:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:32.721 13:50:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:32.721 13:50:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:32.721 13:50:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:32.721 13:50:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.721 13:50:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.721 13:50:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:32.721 13:50:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:32.721 13:50:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:32.721 13:50:23 -- accel/accel.sh@42 -- # jq -r . 00:08:32.721 [2024-07-23 13:50:23.570398] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
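With core mask 0xf four reactors start (cores 0-3) and the report gains one row per core; the Total row is the sum of the per-core rates, and each row's bandwidth follows from its own rate times the 4096-byte transfer size. Checking the table above:

    echo $((54560 + 54880 + 76768 + 55008))   # 241216, the Total transfers/s
    echo $((241216 * 4096 / 1024 / 1024))     # 942 MiB/s aggregate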
00:08:32.721 [2024-07-23 13:50:23.570492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906304 ] 00:08:32.721 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.721 [2024-07-23 13:50:23.691439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.980 [2024-07-23 13:50:23.788449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.980 [2024-07-23 13:50:23.788536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.980 [2024-07-23 13:50:23.788640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.980 [2024-07-23 13:50:23.788640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=0xf 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=decompress 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=software 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@23 -- # accel_module=software 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case 
"$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=32 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=32 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=1 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val=Yes 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:32.980 13:50:23 -- accel/accel.sh@21 -- # val= 00:08:32.980 13:50:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # IFS=: 00:08:32.980 13:50:23 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 
13:50:24 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:24 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:24 -- accel/accel.sh@21 -- # val= 00:08:34.361 13:50:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.361 13:50:25 -- accel/accel.sh@20 -- # IFS=: 00:08:34.361 13:50:25 -- accel/accel.sh@20 -- # read -r var val 00:08:34.361 13:50:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:34.361 13:50:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:34.361 13:50:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.361 00:08:34.361 real 0m2.902s 00:08:34.361 user 0m9.280s 00:08:34.361 sys 0m0.368s 00:08:34.361 13:50:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.361 13:50:25 -- common/autotest_common.sh@10 -- # set +x 00:08:34.361 ************************************ 00:08:34.361 END TEST accel_decomp_mcore 00:08:34.361 ************************************ 00:08:34.361 13:50:25 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.361 13:50:25 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:34.361 13:50:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:34.361 13:50:25 -- common/autotest_common.sh@10 -- # set +x 00:08:34.361 ************************************ 00:08:34.361 START TEST accel_decomp_full_mcore 00:08:34.361 ************************************ 00:08:34.361 13:50:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.361 13:50:25 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.361 13:50:25 -- accel/accel.sh@17 -- # local accel_module 00:08:34.361 13:50:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.361 13:50:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:34.361 13:50:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.361 13:50:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:34.361 13:50:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.361 13:50:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.361 13:50:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:34.361 13:50:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:34.361 13:50:25 -- accel/accel.sh@41 -- # local IFS=, 00:08:34.361 13:50:25 -- accel/accel.sh@42 -- # jq -r . 00:08:34.361 [2024-07-23 13:50:25.072737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
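The -m flag is a DPDK-style hexadecimal core mask, bit N enabling core N, which is why -m 0xf produces the four Reactor started notices below. Pinning the same run to a different core set only means changing the mask (a sketch; the 0x3 mask for cores 0-1 is an illustrative assumption, not a combination this job runs):

    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf \
        -c <(echo '{}') -t 1 -w decompress \
        -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib \
        -y -o 0 -m 0x3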
00:08:34.361 [2024-07-23 13:50:25.072828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906505 ] 00:08:34.361 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.361 [2024-07-23 13:50:25.194374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.361 [2024-07-23 13:50:25.294757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.361 [2024-07-23 13:50:25.294842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.361 [2024-07-23 13:50:25.294947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.361 [2024-07-23 13:50:25.294948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.744 13:50:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:35.744 00:08:35.744 SPDK Configuration: 00:08:35.744 Core mask: 0xf 00:08:35.744 00:08:35.744 Accel Perf Configuration: 00:08:35.744 Workload Type: decompress 00:08:35.744 Transfer size: 111250 bytes 00:08:35.744 Vector count 1 00:08:35.744 Module: software 00:08:35.744 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:35.744 Queue depth: 32 00:08:35.744 Allocate depth: 32 00:08:35.744 # threads/core: 1 00:08:35.744 Run time: 1 seconds 00:08:35.744 Verify: Yes 00:08:35.744 00:08:35.744 Running for 1 seconds... 00:08:35.744 00:08:35.744 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:35.744 ------------------------------------------------------------------------------------ 00:08:35.744 0,0 3808/s 404 MiB/s 0 0 00:08:35.744 3,0 3840/s 407 MiB/s 0 0 00:08:35.744 2,0 5600/s 594 MiB/s 0 0 00:08:35.744 1,0 3840/s 407 MiB/s 0 0 00:08:35.744 ==================================================================================== 00:08:35.744 Total 17088/s 1812 MiB/s 0 0' 00:08:35.744 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:35.744 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:35.744 13:50:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:35.744 13:50:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:35.744 13:50:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:35.744 13:50:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:35.744 13:50:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.744 13:50:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.744 13:50:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:35.744 13:50:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:35.744 13:50:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:35.744 13:50:26 -- accel/accel.sh@42 -- # jq -r . 00:08:35.744 [2024-07-23 13:50:26.545758] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
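The full-buffer multi-core table above obeys the same consistency rules: the per-core rates sum to the Total, and each bandwidth figure is its rate times the 111250-byte transfer size:

    echo $((3808 + 3840 + 5600 + 3840))       # 17088 transfers/s total
    echo $((17088 * 111250 / 1024 / 1024))    # 1812 MiB/s aggregate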
00:08:35.744 [2024-07-23 13:50:26.545854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3906735 ] 00:08:35.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.744 [2024-07-23 13:50:26.667763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.004 [2024-07-23 13:50:26.768784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.004 [2024-07-23 13:50:26.768869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.004 [2024-07-23 13:50:26.768965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.004 [2024-07-23 13:50:26.768966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=0xf 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=decompress 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=software 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@23 -- # accel_module=software 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case 
"$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=32 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=32 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=1 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val=Yes 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:36.004 13:50:26 -- accel/accel.sh@21 -- # val= 00:08:36.004 13:50:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # IFS=: 00:08:36.004 13:50:26 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.384 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.384 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.384 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.385 
13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.385 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.385 13:50:28 -- accel/accel.sh@21 -- # val= 00:08:37.385 13:50:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:37.385 13:50:28 -- accel/accel.sh@20 -- # IFS=: 00:08:37.385 13:50:28 -- accel/accel.sh@20 -- # read -r var val 00:08:37.385 13:50:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:37.385 13:50:28 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:37.385 13:50:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:37.385 00:08:37.385 real 0m2.963s 00:08:37.385 user 0m9.414s 00:08:37.385 sys 0m0.403s 00:08:37.385 13:50:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.385 13:50:28 -- common/autotest_common.sh@10 -- # set +x 00:08:37.385 ************************************ 00:08:37.385 END TEST accel_decomp_full_mcore 00:08:37.385 ************************************ 00:08:37.385 13:50:28 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:37.385 13:50:28 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:37.385 13:50:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.385 13:50:28 -- common/autotest_common.sh@10 -- # set +x 00:08:37.385 ************************************ 00:08:37.385 START TEST accel_decomp_mthread 00:08:37.385 ************************************ 00:08:37.385 13:50:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:37.385 13:50:28 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.385 13:50:28 -- accel/accel.sh@17 -- # local accel_module 00:08:37.385 13:50:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:37.385 13:50:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:37.385 13:50:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.385 13:50:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:37.385 13:50:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:37.385 13:50:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.385 13:50:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:37.385 13:50:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:37.385 13:50:28 -- accel/accel.sh@41 -- # local IFS=, 00:08:37.385 13:50:28 -- accel/accel.sh@42 -- # jq -r . 00:08:37.385 [2024-07-23 13:50:28.082730] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:37.385 [2024-07-23 13:50:28.082831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907014 ] 00:08:37.385 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.385 [2024-07-23 13:50:28.201690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.385 [2024-07-23 13:50:28.298571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.765 13:50:29 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:38.765 00:08:38.765 SPDK Configuration: 00:08:38.765 Core mask: 0x1 00:08:38.765 00:08:38.765 Accel Perf Configuration: 00:08:38.765 Workload Type: decompress 00:08:38.765 Transfer size: 4096 bytes 00:08:38.765 Vector count 1 00:08:38.765 Module: software 00:08:38.765 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:38.765 Queue depth: 32 00:08:38.765 Allocate depth: 32 00:08:38.765 # threads/core: 2 00:08:38.765 Run time: 1 seconds 00:08:38.765 Verify: Yes 00:08:38.765 00:08:38.765 Running for 1 seconds... 00:08:38.765 00:08:38.765 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:38.765 ------------------------------------------------------------------------------------ 00:08:38.765 0,1 31104/s 57 MiB/s 0 0 00:08:38.765 0,0 31008/s 57 MiB/s 0 0 00:08:38.765 ==================================================================================== 00:08:38.765 Total 62112/s 242 MiB/s 0 0' 00:08:38.765 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:38.765 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:38.765 13:50:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:38.765 13:50:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:38.765 13:50:29 -- accel/accel.sh@12 -- # build_accel_config 00:08:38.765 13:50:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:38.765 13:50:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.765 13:50:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.765 13:50:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:38.765 13:50:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:38.765 13:50:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:38.765 13:50:29 -- accel/accel.sh@42 -- # jq -r . 00:08:38.765 [2024-07-23 13:50:29.538189] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
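# Note: the -c /dev/fd/62 argument above makes accel_perf read its JSON config from
# file descriptor 62, which build_accel_config assembles and jq pretty-prints
# (the jq -r . step in the trace). The fd hand-off in miniature, via process
# substitution (the config body here is a stand-in, not the one used by this run):
  cfg='{"subsystems": []}'
  jq -r . <<< "$cfg"   # the pretty-print step seen in the trace
  cat <(echo "$cfg")   # bash exposes the substitution as a /dev/fd/NN path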
00:08:38.765 [2024-07-23 13:50:29.538276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907239 ] 00:08:38.765 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.765 [2024-07-23 13:50:29.658250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.765 [2024-07-23 13:50:29.754678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.024 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.024 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.024 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.024 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=0x1 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=decompress 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=software 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@23 -- # accel_module=software 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=32 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 
13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=32 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=2 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val=Yes 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.025 13:50:29 -- accel/accel.sh@21 -- # val= 00:08:39.025 13:50:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # IFS=: 00:08:39.025 13:50:29 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@21 -- # val= 00:08:39.962 13:50:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # IFS=: 00:08:39.962 13:50:30 -- accel/accel.sh@20 -- # read -r var val 00:08:39.962 13:50:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:39.962 13:50:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:39.962 13:50:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.962 00:08:39.962 real 0m2.918s 00:08:39.962 user 0m2.576s 00:08:39.962 sys 0m0.345s 00:08:39.962 13:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.962 13:50:30 -- common/autotest_common.sh@10 -- # 
set +x 00:08:39.962 ************************************ 00:08:39.962 END TEST accel_decomp_mthread 00:08:39.962 ************************************ 00:08:40.222 13:50:31 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:40.222 13:50:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:40.222 13:50:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.222 13:50:31 -- common/autotest_common.sh@10 -- # set +x 00:08:40.222 ************************************ 00:08:40.222 START TEST accel_decomp_full_mthread ************************************ 00:08:40.222 13:50:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:40.222 13:50:31 -- accel/accel.sh@16 -- # local accel_opc 00:08:40.222 13:50:31 -- accel/accel.sh@17 -- # local accel_module 00:08:40.222 13:50:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:40.222 13:50:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:40.222 13:50:31 -- accel/accel.sh@12 -- # build_accel_config 00:08:40.222 13:50:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:40.222 13:50:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:40.222 13:50:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:40.222 13:50:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:40.222 13:50:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:40.222 13:50:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:40.222 13:50:31 -- accel/accel.sh@42 -- # jq -r . 00:08:40.222 [2024-07-23 13:50:31.047488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:40.222 [2024-07-23 13:50:31.047581] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907434 ] 00:08:40.222 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.222 [2024-07-23 13:50:31.167375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.481 [2024-07-23 13:50:31.264486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.860 13:50:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:41.860 00:08:41.860 SPDK Configuration: 00:08:41.860 Core mask: 0x1 00:08:41.860 00:08:41.860 Accel Perf Configuration: 00:08:41.860 Workload Type: decompress 00:08:41.860 Transfer size: 111250 bytes 00:08:41.860 Vector count 1 00:08:41.860 Module: software 00:08:41.860 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:41.860 Queue depth: 32 00:08:41.860 Allocate depth: 32 00:08:41.860 # threads/core: 2 00:08:41.860 Run time: 1 seconds 00:08:41.860 Verify: Yes 00:08:41.860 00:08:41.860 Running for 1 seconds...
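# Note: -T 2 asks accel_perf for two worker threads on the single core allowed by
# mask 0x1, so the report below has one row per (core,thread) pair, 0,0 and 0,1,
# with the Total row as their sum. Pulling just the worker rows back out of a
# saved report (the file name is hypothetical):
  grep -E '^0,[01][[:space:]]' perf_out.txt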
00:08:41.860 00:08:41.860 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:41.860 ------------------------------------------------------------------------------------ 00:08:41.860 0,1 1952/s 80 MiB/s 0 0 00:08:41.860 0,0 1952/s 80 MiB/s 0 0 00:08:41.860 ==================================================================================== 00:08:41.860 Total 3904/s 414 MiB/s 0 0' 00:08:41.860 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.860 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.860 13:50:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:41.860 13:50:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:41.860 13:50:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.860 13:50:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.860 13:50:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.860 13:50:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.860 13:50:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.860 13:50:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.860 13:50:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.860 13:50:32 -- accel/accel.sh@42 -- # jq -r . 00:08:41.860 [2024-07-23 13:50:32.529554] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:41.860 [2024-07-23 13:50:32.529648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907618 ] 00:08:41.861 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.861 [2024-07-23 13:50:32.650601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.861 [2024-07-23 13:50:32.747384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=0x1 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=decompress 
00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=software 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@23 -- # accel_module=software 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=32 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=32 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=2 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val=Yes 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:41.861 13:50:32 -- accel/accel.sh@21 -- # val= 00:08:41.861 13:50:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # IFS=: 00:08:41.861 13:50:32 -- accel/accel.sh@20 -- # read -r var val 00:08:43.240 13:50:33 -- accel/accel.sh@21 -- # val= 00:08:43.240 13:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # IFS=: 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # read -r var val 00:08:43.241 13:50:33 -- accel/accel.sh@21 -- # val= 00:08:43.241 13:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # IFS=: 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # read -r var val 00:08:43.241 13:50:33 -- accel/accel.sh@21 -- # val= 00:08:43.241 13:50:33 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # IFS=: 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # read -r var val 00:08:43.241 13:50:33 -- accel/accel.sh@21 -- # val= 00:08:43.241 13:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # IFS=: 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # read -r var val 00:08:43.241 13:50:33 -- accel/accel.sh@21 -- # val= 00:08:43.241 13:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # IFS=: 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # read -r var val 00:08:43.241 13:50:33 -- accel/accel.sh@21 -- # val= 00:08:43.241 13:50:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # IFS=: 00:08:43.241 13:50:33 -- accel/accel.sh@20 -- # read -r var val 00:08:43.241 13:50:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:43.241 13:50:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:43.241 13:50:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:43.241 00:08:43.241 real 0m2.969s 00:08:43.241 user 0m2.609s 00:08:43.241 sys 0m0.364s 00:08:43.241 13:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.241 13:50:33 -- common/autotest_common.sh@10 -- # set +x 00:08:43.241 ************************************ 00:08:43.241 END TEST accel_decomp_full_mthread ************************************ 00:08:43.241 13:50:34 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:43.241 13:50:34 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:43.241 13:50:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:43.241 13:50:34 -- accel/accel.sh@129 -- # build_accel_config 00:08:43.241 13:50:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.241 13:50:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:43.241 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.241 13:50:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:43.241 13:50:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:43.241 13:50:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:43.241 13:50:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:43.241 13:50:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:43.241 13:50:34 -- accel/accel.sh@42 -- # jq -r . 00:08:43.241 ************************************ 00:08:43.241 START TEST accel_dif_functional_tests ************************************ 00:08:43.241 13:50:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:43.241 [2024-07-23 13:50:34.066703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:43.241 [2024-07-23 13:50:34.066800] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907818 ] 00:08:43.241 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.241 [2024-07-23 13:50:34.185814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:43.500 [2024-07-23 13:50:34.287125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.500 [2024-07-23 13:50:34.287219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.500 [2024-07-23 13:50:34.287221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.500 00:08:43.500 00:08:43.500 CUnit - A unit testing framework for C - Version 2.1-3 00:08:43.500 http://cunit.sourceforge.net/ 00:08:43.500 00:08:43.500 00:08:43.500 Suite: accel_dif 00:08:43.500 Test: verify: DIF generated, GUARD check ...passed 00:08:43.500 Test: verify: DIF generated, APPTAG check ...passed 00:08:43.500 Test: verify: DIF generated, REFTAG check ...passed 00:08:43.500 Test: verify: DIF not generated, GUARD check ...[2024-07-23 13:50:34.361693] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:43.500 [2024-07-23 13:50:34.361752] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:43.500 passed 00:08:43.500 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 13:50:34.361808] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:43.500 [2024-07-23 13:50:34.361834] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:43.500 passed 00:08:43.500 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 13:50:34.361864] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:43.500 [2024-07-23 13:50:34.361890] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:43.500 passed 00:08:43.500 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:43.500 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 13:50:34.361951] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:43.500 passed 00:08:43.500 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:43.500 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:43.500 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:43.500 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 13:50:34.362088] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:43.500 passed 00:08:43.500 Test: generate copy: DIF generated, GUARD check ...passed 00:08:43.500 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:43.500 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:43.500 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:43.500 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:43.500 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:43.500 Test: generate copy: iovecs-len validate ...[2024-07-23 13:50:34.362334] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:08:43.501 passed 00:08:43.501 Test: generate copy: buffer alignment validate ...passed 00:08:43.501 00:08:43.501 Run Summary: Type Total Ran Passed Failed Inactive 00:08:43.501 suites 1 1 n/a 0 0 00:08:43.501 tests 20 20 20 0 0 00:08:43.501 asserts 204 204 204 0 n/a 00:08:43.501 00:08:43.501 Elapsed time = 0.003 seconds 00:08:43.760 00:08:43.760 real 0m0.511s 00:08:43.760 user 0m0.694s 00:08:43.760 sys 0m0.201s 00:08:43.760 13:50:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.760 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.760 ************************************ 00:08:43.760 END TEST accel_dif_functional_tests 00:08:43.760 ************************************ 00:08:43.760 00:08:43.760 real 1m2.254s 00:08:43.760 user 1m7.833s 00:08:43.760 sys 0m9.374s 00:08:43.760 13:50:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.760 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.760 ************************************ 00:08:43.760 END TEST accel 00:08:43.760 ************************************ 00:08:43.760 13:50:34 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:43.760 13:50:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:43.760 13:50:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.760 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.760 ************************************ 00:08:43.760 START TEST accel_rpc 00:08:43.760 ************************************ 00:08:43.760 13:50:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:43.760 * Looking for test storage... 00:08:43.760 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:08:43.760 13:50:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:43.760 13:50:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3907947 00:08:43.760 13:50:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 3907947 00:08:43.760 13:50:34 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:43.760 13:50:34 -- common/autotest_common.sh@819 -- # '[' -z 3907947 ']' 00:08:43.760 13:50:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.760 13:50:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:43.760 13:50:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.760 13:50:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:43.760 13:50:34 -- common/autotest_common.sh@10 -- # set +x 00:08:43.760 [2024-07-23 13:50:34.761206] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
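# Note: accel_rpc starts spdk_tgt with --wait-for-rpc, which holds subsystem
# initialization until framework_start_init arrives on /var/tmp/spdk.sock, and
# waitforlisten polls until the socket answers. The startup dance as a sketch
# (paths assume an SPDK build tree; the poll loop simplifies what waitforlisten
# really does):
  build/bin/spdk_tgt --wait-for-rpc & tgt=$!
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  scripts/rpc.py framework_start_init
  kill "$tgt"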
00:08:43.760 [2024-07-23 13:50:34.761303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907947 ] 00:08:44.020 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.020 [2024-07-23 13:50:34.874936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.020 [2024-07-23 13:50:34.971856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:44.020 [2024-07-23 13:50:34.972001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.958 13:50:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:44.958 13:50:35 -- common/autotest_common.sh@852 -- # return 0 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:44.958 13:50:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:44.958 13:50:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.958 13:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:44.958 ************************************ 00:08:44.958 START TEST accel_assign_opcode 00:08:44.958 ************************************ 00:08:44.958 13:50:35 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:44.958 13:50:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.958 13:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:44.958 [2024-07-23 13:50:35.734372] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:44.958 13:50:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:44.958 13:50:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.958 13:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:44.958 [2024-07-23 13:50:35.742387] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:44.958 13:50:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:44.958 13:50:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.958 13:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:44.958 13:50:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:44.958 13:50:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:44.958 13:50:35 -- accel/accel_rpc.sh@42 -- # grep software 00:08:44.958 13:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:44.958 13:50:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:45.217 software 00:08:45.218 00:08:45.218 real 0m0.275s 00:08:45.218 user 0m0.049s 00:08:45.218 sys 0m0.013s 00:08:45.218 13:50:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.218 13:50:36 -- common/autotest_common.sh@10 -- # set +x 
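# Note: the lone "software" line above is the test confirming its opcode
# assignment: accel_assign_opc is only legal before framework init, so the test
# maps copy to a bogus module, remaps it to software, starts the framework, and
# reads the mapping back. The same check, condensed (against a target still in
# --wait-for-rpc state):
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software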
00:08:45.218 ************************************ 00:08:45.218 END TEST accel_assign_opcode 00:08:45.218 ************************************ 00:08:45.218 13:50:36 -- accel/accel_rpc.sh@55 -- # killprocess 3907947 00:08:45.218 13:50:36 -- common/autotest_common.sh@926 -- # '[' -z 3907947 ']' 00:08:45.218 13:50:36 -- common/autotest_common.sh@930 -- # kill -0 3907947 00:08:45.218 13:50:36 -- common/autotest_common.sh@931 -- # uname 00:08:45.218 13:50:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:45.218 13:50:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3907947 00:08:45.218 13:50:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:45.218 13:50:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:45.218 13:50:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3907947' 00:08:45.218 killing process with pid 3907947 00:08:45.218 13:50:36 -- common/autotest_common.sh@945 -- # kill 3907947 00:08:45.218 13:50:36 -- common/autotest_common.sh@950 -- # wait 3907947 00:08:45.477 00:08:45.477 real 0m1.818s 00:08:45.477 user 0m1.892s 00:08:45.477 sys 0m0.550s 00:08:45.477 13:50:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.477 13:50:36 -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 ************************************ 00:08:45.477 END TEST accel_rpc 00:08:45.477 ************************************ 00:08:45.735 13:50:36 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:45.735 13:50:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.735 13:50:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.735 13:50:36 -- common/autotest_common.sh@10 -- # set +x 00:08:45.735 ************************************ 00:08:45.735 START TEST app_cmdline 00:08:45.735 ************************************ 00:08:45.735 13:50:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:08:45.735 * Looking for test storage... 00:08:45.735 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:45.735 13:50:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:45.735 13:50:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3908303 00:08:45.735 13:50:36 -- app/cmdline.sh@18 -- # waitforlisten 3908303 00:08:45.735 13:50:36 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:45.735 13:50:36 -- common/autotest_common.sh@819 -- # '[' -z 3908303 ']' 00:08:45.736 13:50:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.736 13:50:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.736 13:50:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.736 13:50:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.736 13:50:36 -- common/autotest_common.sh@10 -- # set +x 00:08:45.736 [2024-07-23 13:50:36.637839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
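# Note: app_cmdline launches spdk_tgt with
# --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are
# served; anything else should come back as JSON-RPC error -32601 ("Method not
# found"), which is exactly what the env_dpdk_get_mem_stats call below is
# expected to hit. Sketch:
  build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version          # allowed; prints the version JSON
  scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601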
00:08:45.736 [2024-07-23 13:50:36.637930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908303 ] 00:08:45.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.994 [2024-07-23 13:50:36.760016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.994 [2024-07-23 13:50:36.855289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.994 [2024-07-23 13:50:36.855445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.572 13:50:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.572 13:50:37 -- common/autotest_common.sh@852 -- # return 0 00:08:46.572 13:50:37 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:46.865 { 00:08:46.865 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:08:46.865 "fields": { 00:08:46.865 "major": 24, 00:08:46.865 "minor": 1, 00:08:46.865 "patch": 1, 00:08:46.865 "suffix": "-pre", 00:08:46.865 "commit": "dbef7efac" 00:08:46.865 } 00:08:46.865 } 00:08:46.865 13:50:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:46.865 13:50:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:46.865 13:50:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:46.865 13:50:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:46.865 13:50:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:46.865 13:50:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:46.865 13:50:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.865 13:50:37 -- app/cmdline.sh@26 -- # sort 00:08:46.865 13:50:37 -- common/autotest_common.sh@10 -- # set +x 00:08:46.865 13:50:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.865 13:50:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:46.865 13:50:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:46.865 13:50:37 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:46.865 13:50:37 -- common/autotest_common.sh@640 -- # local es=0 00:08:46.865 13:50:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:46.865 13:50:37 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:46.865 13:50:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:46.865 13:50:37 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:46.865 13:50:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:46.865 13:50:37 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:46.865 13:50:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:46.865 13:50:37 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:08:46.865 13:50:37 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:08:46.865 13:50:37 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:47.137 request: 00:08:47.137 { 00:08:47.137 "method": "env_dpdk_get_mem_stats", 00:08:47.137 "req_id": 1 00:08:47.137 } 00:08:47.137 Got JSON-RPC error response 00:08:47.137 response: 00:08:47.137 { 00:08:47.137 "code": -32601, 00:08:47.137 "message": "Method not found" 00:08:47.137 } 00:08:47.137 13:50:38 -- common/autotest_common.sh@643 -- # es=1 00:08:47.137 13:50:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:47.137 13:50:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:47.137 13:50:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:47.137 13:50:38 -- app/cmdline.sh@1 -- # killprocess 3908303 00:08:47.137 13:50:38 -- common/autotest_common.sh@926 -- # '[' -z 3908303 ']' 00:08:47.137 13:50:38 -- common/autotest_common.sh@930 -- # kill -0 3908303 00:08:47.137 13:50:38 -- common/autotest_common.sh@931 -- # uname 00:08:47.137 13:50:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.137 13:50:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3908303 00:08:47.137 13:50:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.137 13:50:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.137 13:50:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3908303' 00:08:47.137 killing process with pid 3908303 00:08:47.137 13:50:38 -- common/autotest_common.sh@945 -- # kill 3908303 00:08:47.137 13:50:38 -- common/autotest_common.sh@950 -- # wait 3908303 00:08:47.704 00:08:47.704 real 0m1.926s 00:08:47.704 user 0m2.321s 00:08:47.704 sys 0m0.573s 00:08:47.704 13:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.704 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.704 ************************************ 00:08:47.704 END TEST app_cmdline 00:08:47.704 ************************************ 00:08:47.704 13:50:38 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:47.704 13:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:47.704 13:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.704 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.704 ************************************ 00:08:47.704 START TEST version 00:08:47.704 ************************************ 00:08:47.704 13:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:08:47.704 * Looking for test storage... 
00:08:47.704 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:47.704 13:50:38 -- app/version.sh@17 -- # get_header_version major 00:08:47.704 13:50:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:47.704 13:50:38 -- app/version.sh@14 -- # cut -f2 00:08:47.704 13:50:38 -- app/version.sh@14 -- # tr -d '"' 00:08:47.704 13:50:38 -- app/version.sh@17 -- # major=24 00:08:47.704 13:50:38 -- app/version.sh@18 -- # get_header_version minor 00:08:47.704 13:50:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:47.704 13:50:38 -- app/version.sh@14 -- # tr -d '"' 00:08:47.704 13:50:38 -- app/version.sh@14 -- # cut -f2 00:08:47.704 13:50:38 -- app/version.sh@18 -- # minor=1 00:08:47.704 13:50:38 -- app/version.sh@19 -- # get_header_version patch 00:08:47.704 13:50:38 -- app/version.sh@14 -- # cut -f2 00:08:47.704 13:50:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:47.704 13:50:38 -- app/version.sh@14 -- # tr -d '"' 00:08:47.704 13:50:38 -- app/version.sh@19 -- # patch=1 00:08:47.704 13:50:38 -- app/version.sh@20 -- # get_header_version suffix 00:08:47.704 13:50:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:08:47.704 13:50:38 -- app/version.sh@14 -- # cut -f2 00:08:47.704 13:50:38 -- app/version.sh@14 -- # tr -d '"' 00:08:47.704 13:50:38 -- app/version.sh@20 -- # suffix=-pre 00:08:47.704 13:50:38 -- app/version.sh@22 -- # version=24.1 00:08:47.704 13:50:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:47.704 13:50:38 -- app/version.sh@25 -- # version=24.1.1 00:08:47.704 13:50:38 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:47.704 13:50:38 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:47.704 13:50:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:47.704 13:50:38 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:47.704 13:50:38 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:47.704 00:08:47.704 real 0m0.184s 00:08:47.704 user 0m0.096s 00:08:47.704 sys 0m0.130s 00:08:47.704 13:50:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:47.704 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.704 ************************************ 00:08:47.704 END TEST version 00:08:47.704 ************************************ 00:08:47.704 13:50:38 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:47.704 13:50:38 -- spdk/autotest.sh@204 -- # uname -s 00:08:47.704 13:50:38 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:47.704 13:50:38 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:47.704 13:50:38 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:47.704 13:50:38 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:47.704 13:50:38 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:47.704 13:50:38 -- spdk/autotest.sh@268 -- # timing_exit lib 
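# Note: version.sh above assembles 24.1.1rc0 by grepping the SPDK_VERSION_*
# macros out of include/spdk/version.h (the fields are tab-separated, hence the
# bare cut -f2) and then comparing against Python's spdk.__version__. The
# extraction step on its own, run from an SPDK checkout:
  v=include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$v" | cut -f2 | tr -d '"')
  echo "$major.$minor.$patch"   # 24.1.1 for this build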
00:08:47.704 13:50:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:47.704 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.962 13:50:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:08:47.962 13:50:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:08:47.962 13:50:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:08:47.962 13:50:38 -- spdk/autotest.sh@374 -- # [[ 1 -eq 1 ]] 00:08:47.962 13:50:38 -- spdk/autotest.sh@375 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:47.962 13:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:47.962 13:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.962 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.962 ************************************ 00:08:47.962 START TEST llvm_fuzz 00:08:47.962 ************************************ 00:08:47.962 13:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:08:47.962 * Looking for test storage... 
00:08:47.962 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:08:47.962 13:50:38 -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:08:47.962 13:50:38 -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:08:47.962 13:50:38 -- common/autotest_common.sh@538 -- # fuzzers=() 00:08:47.962 13:50:38 -- common/autotest_common.sh@538 -- # local fuzzers 00:08:47.962 13:50:38 -- common/autotest_common.sh@540 -- # [[ -n '' ]] 00:08:47.962 13:50:38 -- common/autotest_common.sh@543 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:08:47.962 13:50:38 -- common/autotest_common.sh@544 -- # fuzzers=("${fuzzers[@]##*/}") 00:08:47.962 13:50:38 -- common/autotest_common.sh@547 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:08:47.962 13:50:38 -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:08:47.962 13:50:38 -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:08:47.962 13:50:38 -- fuzz/llvm.sh@56 -- # [[ 1 -eq 0 ]] 00:08:47.962 13:50:38 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:47.962 13:50:38 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:47.962 13:50:38 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:47.962 13:50:38 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:47.962 13:50:38 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:08:47.962 13:50:38 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:08:47.962 13:50:38 -- fuzz/llvm.sh@62 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:47.962 13:50:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:47.962 13:50:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:47.962 13:50:38 -- common/autotest_common.sh@10 -- # set +x 00:08:47.962 ************************************ 00:08:47.962 START TEST nvmf_fuzz 00:08:47.962 ************************************ 00:08:47.962 13:50:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:08:48.223 * Looking for test storage... 
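Note: the llvm.sh trace above is the entire fuzzer-discovery step. get_fuzzer_targets globs test/fuzz/llvm/*, strips the paths to basenames (common.sh llvm-gcov.sh nvmf vfio), and the caller's case statement then dispatches a run.sh only for real target directories. Restated as a self-contained sketch; rootdir and the case arms are taken from the trace, and reading the empty [[ -n '' ]] guard as the SPDK_TEST_FUZZER_TARGET filter is an inference from the variable exported later in this log:

    #!/usr/bin/env bash
    # Sketch of fuzzer discovery traced in test/fuzz/llvm.sh.
    rootdir=${rootdir:-.}   # the trace resolves this to the spdk checkout

    get_fuzzer_targets() {
        local fuzzers=()
        # [[ -n $SPDK_TEST_FUZZER_TARGET ]] would narrow the list; it is
        # empty in this run, so every entry under test/fuzz/llvm/ is kept.
        fuzzers=("$rootdir/test/fuzz/llvm/"*)
        fuzzers=("${fuzzers[@]##*/}")   # strip dirnames -> basenames
        echo "${fuzzers[*]}"            # common.sh llvm-gcov.sh nvmf vfio
    }

    for fuzzer in $(get_fuzzer_targets); do
        case "$fuzzer" in
            nvmf | vfio)                # only target dirs carry a run.sh
                echo "would dispatch: test/fuzz/llvm/$fuzzer/run.sh" ;;
            *) ;;                       # helper scripts fall through
        esac
    done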
00:08:48.224 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:48.224 13:50:38 -- nvmf/run.sh@52 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:48.224 13:50:38 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:48.224 13:50:38 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:48.224 13:50:38 -- common/autotest_common.sh@34 -- # set -e 00:08:48.224 13:50:38 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:48.224 13:50:38 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:48.224 13:50:38 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:48.224 13:50:38 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:48.224 13:50:38 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:48.224 13:50:38 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:48.224 13:50:38 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:48.224 13:50:38 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:48.224 13:50:38 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:48.224 13:50:38 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:48.224 13:50:38 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:48.224 13:50:38 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:48.224 13:50:38 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:48.224 13:50:38 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:48.224 13:50:38 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:48.224 13:50:38 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:48.224 13:50:38 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:48.224 13:50:38 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:48.224 13:50:38 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:48.224 13:50:38 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:48.224 13:50:38 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:48.224 13:50:38 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:48.224 13:50:38 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:48.224 13:50:38 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:48.224 13:50:38 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:48.224 13:50:38 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:48.224 13:50:38 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:48.224 13:50:38 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:48.224 13:50:38 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:48.224 13:50:38 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:48.224 13:50:38 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:48.224 13:50:38 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:48.224 13:50:38 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:48.224 13:50:38 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:48.224 13:50:38 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@34 -- # 
CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:48.224 13:50:38 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:48.224 13:50:38 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:48.224 13:50:38 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:48.224 13:50:38 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:48.224 13:50:38 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:48.224 13:50:38 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:48.224 13:50:38 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:48.224 13:50:38 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:48.224 13:50:38 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:48.224 13:50:38 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:48.224 13:50:38 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:48.224 13:50:38 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:48.224 13:50:38 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:48.224 13:50:38 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:48.224 13:50:38 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:48.224 13:50:38 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:48.224 13:50:38 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:48.224 13:50:38 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:48.224 13:50:38 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:48.224 13:50:38 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:48.224 13:50:38 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:48.224 13:50:38 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:48.224 13:50:38 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:48.224 13:50:38 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:48.224 13:50:38 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:48.224 13:50:38 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:48.224 13:50:38 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:08:48.224 13:50:38 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:48.224 13:50:39 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:48.224 13:50:39 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:48.224 13:50:39 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:48.224 13:50:39 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:48.224 13:50:39 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:48.224 13:50:39 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:48.224 13:50:39 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:48.224 13:50:39 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:48.224 13:50:39 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:48.224 13:50:39 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:48.224 13:50:39 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:48.224 13:50:39 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:48.224 13:50:39 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:48.224 13:50:39 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:48.224 13:50:39 -- common/autotest_common.sh@48 -- # source 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:48.224 13:50:39 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:48.224 13:50:39 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:48.224 13:50:39 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:48.224 13:50:39 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:48.224 13:50:39 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:48.224 13:50:39 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:48.224 13:50:39 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:48.224 13:50:39 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:48.224 13:50:39 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:48.224 13:50:39 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:48.224 13:50:39 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:48.224 13:50:39 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:48.224 13:50:39 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:48.224 13:50:39 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:48.224 13:50:39 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:48.224 #define SPDK_CONFIG_H 00:08:48.224 #define SPDK_CONFIG_APPS 1 00:08:48.224 #define SPDK_CONFIG_ARCH native 00:08:48.224 #undef SPDK_CONFIG_ASAN 00:08:48.224 #undef SPDK_CONFIG_AVAHI 00:08:48.224 #undef SPDK_CONFIG_CET 00:08:48.224 #define SPDK_CONFIG_COVERAGE 1 00:08:48.224 #define SPDK_CONFIG_CROSS_PREFIX 00:08:48.224 #undef SPDK_CONFIG_CRYPTO 00:08:48.224 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:48.224 #undef SPDK_CONFIG_CUSTOMOCF 00:08:48.224 #undef SPDK_CONFIG_DAOS 00:08:48.224 #define SPDK_CONFIG_DAOS_DIR 00:08:48.224 #define SPDK_CONFIG_DEBUG 1 00:08:48.224 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:48.224 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:48.224 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:48.224 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:48.224 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:48.224 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:48.224 #define SPDK_CONFIG_EXAMPLES 1 00:08:48.224 #undef SPDK_CONFIG_FC 00:08:48.224 #define SPDK_CONFIG_FC_PATH 00:08:48.224 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:48.224 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:48.224 #undef SPDK_CONFIG_FUSE 00:08:48.224 #define SPDK_CONFIG_FUZZER 1 00:08:48.224 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:08:48.224 #undef SPDK_CONFIG_GOLANG 00:08:48.224 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:48.224 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:48.224 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:48.224 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:48.224 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:48.224 #define SPDK_CONFIG_IDXD 1 00:08:48.224 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:48.224 #undef SPDK_CONFIG_IPSEC_MB 
00:08:48.224 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:48.224 #define SPDK_CONFIG_ISAL 1 00:08:48.224 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:48.224 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:48.224 #define SPDK_CONFIG_LIBDIR 00:08:48.224 #undef SPDK_CONFIG_LTO 00:08:48.224 #define SPDK_CONFIG_MAX_LCORES 00:08:48.224 #define SPDK_CONFIG_NVME_CUSE 1 00:08:48.224 #undef SPDK_CONFIG_OCF 00:08:48.224 #define SPDK_CONFIG_OCF_PATH 00:08:48.225 #define SPDK_CONFIG_OPENSSL_PATH 00:08:48.225 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:48.225 #undef SPDK_CONFIG_PGO_USE 00:08:48.225 #define SPDK_CONFIG_PREFIX /usr/local 00:08:48.225 #undef SPDK_CONFIG_RAID5F 00:08:48.225 #undef SPDK_CONFIG_RBD 00:08:48.225 #define SPDK_CONFIG_RDMA 1 00:08:48.225 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:48.225 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:48.225 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:48.225 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:48.225 #undef SPDK_CONFIG_SHARED 00:08:48.225 #undef SPDK_CONFIG_SMA 00:08:48.225 #define SPDK_CONFIG_TESTS 1 00:08:48.225 #undef SPDK_CONFIG_TSAN 00:08:48.225 #define SPDK_CONFIG_UBLK 1 00:08:48.225 #define SPDK_CONFIG_UBSAN 1 00:08:48.225 #undef SPDK_CONFIG_UNIT_TESTS 00:08:48.225 #undef SPDK_CONFIG_URING 00:08:48.225 #define SPDK_CONFIG_URING_PATH 00:08:48.225 #undef SPDK_CONFIG_URING_ZNS 00:08:48.225 #undef SPDK_CONFIG_USDT 00:08:48.225 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:48.225 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:48.225 #define SPDK_CONFIG_VFIO_USER 1 00:08:48.225 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:48.225 #define SPDK_CONFIG_VHOST 1 00:08:48.225 #define SPDK_CONFIG_VIRTIO 1 00:08:48.225 #undef SPDK_CONFIG_VTUNE 00:08:48.225 #define SPDK_CONFIG_VTUNE_DIR 00:08:48.225 #define SPDK_CONFIG_WERROR 1 00:08:48.225 #define SPDK_CONFIG_WPDK_DIR 00:08:48.225 #undef SPDK_CONFIG_XNVME 00:08:48.225 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:48.225 13:50:39 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:48.225 13:50:39 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:48.225 13:50:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.225 13:50:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.225 13:50:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.225 13:50:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.225 13:50:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.225 13:50:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.225 13:50:39 -- paths/export.sh@5 -- # export PATH 00:08:48.225 13:50:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.225 13:50:39 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:48.225 13:50:39 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:48.225 13:50:39 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:48.225 13:50:39 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:48.225 13:50:39 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:48.225 13:50:39 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:48.225 13:50:39 -- pm/common@16 -- # TEST_TAG=N/A 00:08:48.225 13:50:39 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:48.225 13:50:39 -- common/autotest_common.sh@52 -- # : 1 00:08:48.225 13:50:39 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:48.225 13:50:39 -- common/autotest_common.sh@56 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:48.225 13:50:39 -- common/autotest_common.sh@58 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:48.225 13:50:39 -- common/autotest_common.sh@60 -- # : 1 00:08:48.225 13:50:39 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:48.225 13:50:39 -- common/autotest_common.sh@62 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:48.225 13:50:39 -- common/autotest_common.sh@64 -- # : 00:08:48.225 13:50:39 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:48.225 13:50:39 -- common/autotest_common.sh@66 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:48.225 13:50:39 -- common/autotest_common.sh@68 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:48.225 13:50:39 -- common/autotest_common.sh@70 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:48.225 13:50:39 -- common/autotest_common.sh@72 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:48.225 13:50:39 -- common/autotest_common.sh@74 -- # : 0 00:08:48.225 
13:50:39 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:48.225 13:50:39 -- common/autotest_common.sh@76 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:48.225 13:50:39 -- common/autotest_common.sh@78 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:48.225 13:50:39 -- common/autotest_common.sh@80 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:48.225 13:50:39 -- common/autotest_common.sh@82 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:48.225 13:50:39 -- common/autotest_common.sh@84 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:48.225 13:50:39 -- common/autotest_common.sh@86 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:48.225 13:50:39 -- common/autotest_common.sh@88 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:48.225 13:50:39 -- common/autotest_common.sh@90 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:48.225 13:50:39 -- common/autotest_common.sh@92 -- # : 1 00:08:48.225 13:50:39 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:48.225 13:50:39 -- common/autotest_common.sh@94 -- # : 1 00:08:48.225 13:50:39 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:48.225 13:50:39 -- common/autotest_common.sh@96 -- # : rdma 00:08:48.225 13:50:39 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:48.225 13:50:39 -- common/autotest_common.sh@98 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:48.225 13:50:39 -- common/autotest_common.sh@100 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:48.225 13:50:39 -- common/autotest_common.sh@102 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:48.225 13:50:39 -- common/autotest_common.sh@104 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:48.225 13:50:39 -- common/autotest_common.sh@106 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:48.225 13:50:39 -- common/autotest_common.sh@108 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:48.225 13:50:39 -- common/autotest_common.sh@110 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:48.225 13:50:39 -- common/autotest_common.sh@112 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:48.225 13:50:39 -- common/autotest_common.sh@114 -- # : 0 00:08:48.225 13:50:39 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:48.225 13:50:39 -- common/autotest_common.sh@116 -- # : 1 00:08:48.225 13:50:39 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:48.225 13:50:39 -- common/autotest_common.sh@118 -- # : 00:08:48.226 13:50:39 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:48.226 13:50:39 -- common/autotest_common.sh@120 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:48.226 13:50:39 -- common/autotest_common.sh@122 -- # : 0 
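Note: the paired '# : 0' / '# export SPDK_TEST_...' lines running through this stretch of the trace (they continue below) are bash's default-assignment idiom as rendered by xtrace, which prints each statement after expansion:

    # Source form consistent with each traced pair (illustrative flag name):
    : "${SPDK_TEST_NVME:=0}"    # ':' is a no-op; ':=' assigns 0 only if unset
    export SPDK_TEST_NVME
    # Flags already set by the sourced autorun-spdk.conf keep their values,
    # which is why SPDK_RUN_UBSAN and SPDK_TEST_FUZZER appear above as '# : 1'.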
00:08:48.226 13:50:39 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:48.226 13:50:39 -- common/autotest_common.sh@124 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:48.226 13:50:39 -- common/autotest_common.sh@126 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:48.226 13:50:39 -- common/autotest_common.sh@128 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:48.226 13:50:39 -- common/autotest_common.sh@130 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:48.226 13:50:39 -- common/autotest_common.sh@132 -- # : 00:08:48.226 13:50:39 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:48.226 13:50:39 -- common/autotest_common.sh@134 -- # : true 00:08:48.226 13:50:39 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:48.226 13:50:39 -- common/autotest_common.sh@136 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:48.226 13:50:39 -- common/autotest_common.sh@138 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:48.226 13:50:39 -- common/autotest_common.sh@140 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:48.226 13:50:39 -- common/autotest_common.sh@142 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:48.226 13:50:39 -- common/autotest_common.sh@144 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:48.226 13:50:39 -- common/autotest_common.sh@146 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:48.226 13:50:39 -- common/autotest_common.sh@148 -- # : 00:08:48.226 13:50:39 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:48.226 13:50:39 -- common/autotest_common.sh@150 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:48.226 13:50:39 -- common/autotest_common.sh@152 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:48.226 13:50:39 -- common/autotest_common.sh@154 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:48.226 13:50:39 -- common/autotest_common.sh@156 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:48.226 13:50:39 -- common/autotest_common.sh@158 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:48.226 13:50:39 -- common/autotest_common.sh@160 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:48.226 13:50:39 -- common/autotest_common.sh@163 -- # : 00:08:48.226 13:50:39 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:48.226 13:50:39 -- common/autotest_common.sh@165 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:48.226 13:50:39 -- common/autotest_common.sh@167 -- # : 0 00:08:48.226 13:50:39 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:48.226 13:50:39 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:48.226 13:50:39 -- 
common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:48.226 13:50:39 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:48.226 13:50:39 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:48.226 13:50:39 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:48.226 13:50:39 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:48.226 13:50:39 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:48.226 13:50:39 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:48.226 13:50:39 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:48.226 13:50:39 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:48.226 13:50:39 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:48.226 13:50:39 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:48.226 13:50:39 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:48.226 13:50:39 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:48.226 13:50:39 -- common/autotest_common.sh@196 -- # cat 00:08:48.226 13:50:39 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:48.226 13:50:39 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:48.226 13:50:39 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:48.226 13:50:39 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:48.226 13:50:39 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:48.226 13:50:39 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:48.226 13:50:39 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:48.226 13:50:39 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:48.226 13:50:39 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:48.226 13:50:39 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:48.226 13:50:39 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:48.226 13:50:39 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:48.226 13:50:39 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:48.226 13:50:39 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:48.226 13:50:39 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:48.226 13:50:39 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:48.226 13:50:39 -- common/autotest_common.sh@242 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:48.226 13:50:39 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:48.226 13:50:39 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:48.226 13:50:39 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:48.226 13:50:39 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:48.226 13:50:39 -- common/autotest_common.sh@249 -- # valgrind= 00:08:48.226 13:50:39 -- common/autotest_common.sh@255 -- # uname -s 00:08:48.226 13:50:39 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:48.226 13:50:39 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:48.226 13:50:39 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:48.226 13:50:39 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:48.226 13:50:39 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:48.226 13:50:39 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:48.227 13:50:39 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:08:48.227 13:50:39 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:48.227 13:50:39 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:48.227 13:50:39 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:48.227 13:50:39 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:48.227 13:50:39 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:48.227 13:50:39 -- common/autotest_common.sh@309 -- # [[ -z 3908791 ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@309 -- # kill -0 3908791 00:08:48.227 13:50:39 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:48.227 13:50:39 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:48.227 13:50:39 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:48.227 13:50:39 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:48.227 13:50:39 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:48.227 13:50:39 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:48.227 13:50:39 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:48.227 13:50:39 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.pWRjoE 00:08:48.227 13:50:39 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:48.227 13:50:39 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.pWRjoE/tests/nvmf /tmp/spdk.pWRjoE 00:08:48.227 13:50:39 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@318 -- # df -T 00:08:48.227 13:50:39 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 
00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=893108224 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=4391321600 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=82055532544 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94508572672 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=12453040128 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=47200768000 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=18895626240 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18901716992 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=6090752 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=47253069824 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=1216512 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450852352 00:08:48.227 13:50:39 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450856448 00:08:48.227 13:50:39 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:48.227 13:50:39 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:48.227 13:50:39 -- 
common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:48.227 * Looking for test storage... 00:08:48.227 13:50:39 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:48.227 13:50:39 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:48.227 13:50:39 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:48.227 13:50:39 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:48.227 13:50:39 -- common/autotest_common.sh@363 -- # mount=/ 00:08:48.227 13:50:39 -- common/autotest_common.sh@365 -- # target_space=82055532544 00:08:48.227 13:50:39 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:48.227 13:50:39 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:48.227 13:50:39 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@372 -- # new_size=14667632640 00:08:48.227 13:50:39 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:48.227 13:50:39 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:48.227 13:50:39 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:48.227 13:50:39 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:48.227 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:08:48.227 13:50:39 -- common/autotest_common.sh@380 -- # return 0 00:08:48.227 13:50:39 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:48.227 13:50:39 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:48.227 13:50:39 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:48.227 13:50:39 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:48.227 13:50:39 -- common/autotest_common.sh@1672 -- # true 00:08:48.227 13:50:39 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:48.227 13:50:39 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:48.227 13:50:39 -- common/autotest_common.sh@27 -- # exec 00:08:48.227 13:50:39 -- common/autotest_common.sh@29 -- # exec 00:08:48.227 13:50:39 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:48.227 13:50:39 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:48.227 13:50:39 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:48.227 13:50:39 -- common/autotest_common.sh@18 -- # set -x 00:08:48.227 13:50:39 -- nvmf/run.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:08:48.227 13:50:39 -- ../common.sh@8 -- # pids=() 00:08:48.227 13:50:39 -- nvmf/run.sh@55 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:48.227 13:50:39 -- nvmf/run.sh@56 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:08:48.227 13:50:39 -- nvmf/run.sh@56 -- # fuzz_num=25 00:08:48.227 13:50:39 -- nvmf/run.sh@57 -- # (( fuzz_num != 0 )) 00:08:48.227 13:50:39 -- nvmf/run.sh@59 -- # trap 'cleanup /tmp/llvm_fuzz*; exit 1' SIGINT SIGTERM EXIT 00:08:48.227 13:50:39 -- nvmf/run.sh@61 -- # mem_size=512 00:08:48.227 13:50:39 -- nvmf/run.sh@62 -- # [[ 1 -eq 1 ]] 00:08:48.227 13:50:39 -- nvmf/run.sh@63 -- # start_llvm_fuzz_short 25 1 00:08:48.227 13:50:39 -- ../common.sh@69 -- # local fuzz_num=25 00:08:48.227 13:50:39 -- ../common.sh@70 -- # local time=1 00:08:48.227 13:50:39 -- ../common.sh@72 -- # (( i = 0 )) 00:08:48.227 13:50:39 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:48.227 13:50:39 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:48.227 13:50:39 -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:08:48.227 13:50:39 -- nvmf/run.sh@24 -- # local timen=1 00:08:48.227 13:50:39 -- nvmf/run.sh@25 -- # local core=0x1 00:08:48.227 13:50:39 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:48.227 13:50:39 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:08:48.227 13:50:39 -- nvmf/run.sh@29 -- # printf %02d 0 00:08:48.227 13:50:39 -- nvmf/run.sh@29 -- # port=4400 00:08:48.227 13:50:39 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:48.227 13:50:39 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:08:48.227 13:50:39 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:48.227 13:50:39 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock 00:08:48.227 [2024-07-23 13:50:39.187117] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
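Note: between the storage probe (set_test_storage walks df -T, keeps the first candidate with the ~2 GiB it requested, and lands on the overlay root) and the DPDK banner below, nvmf/run.sh has completed its per-fuzzer setup: it counts one fuzz target per '.fn =' handler in the harness source (25 here), derives a dedicated TCP port from the fuzzer index, rewrites trsvcid in the canned JSON config, and launches llvm_nvme_fuzz. Condensed into a sketch for fuzzer 0, with values read off the trace; paths are shortened, the redirect into /tmp/fuzz_json_0.conf is inferred from the -c argument, and the -P output and -r socket flags of the real invocation are dropped for brevity:

    #!/usr/bin/env bash
    # Sketch of the fuzzer-0 setup traced in test/fuzz/llvm/nvmf/run.sh.
    rootdir=${rootdir:-.}
    fuzzer_type=0
    timen=1          # -t 1: per-fuzzer time budget used by the short run
    core=0x1

    # One fuzz target per registered '.fn =' handler in the harness source:
    fuzz_num=$(grep -c '\.fn =' \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c")   # -> 25

    port=44$(printf %02d "$fuzzer_type")    # fuzzer 0 listens on 4400
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Retarget the canned config from the default 4420 to this fuzzer's port:
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" \
        > "/tmp/fuzz_json_$fuzzer_type.conf"

    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -F "$trid" -c "/tmp/fuzz_json_$fuzzer_type.conf" \
        -t "$timen" -D "$rootdir/../corpus/llvm_nvmf_$fuzzer_type" -Z "$fuzzer_type"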
00:08:48.227 [2024-07-23 13:50:39.187187] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908846 ] 00:08:48.227 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.794 [2024-07-23 13:50:39.557298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.794 [2024-07-23 13:50:39.654597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.794 [2024-07-23 13:50:39.654788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.794 [2024-07-23 13:50:39.717335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.794 [2024-07-23 13:50:39.733561] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:08:48.794 INFO: Running with entropic power schedule (0xFF, 100). 00:08:48.794 INFO: Seed: 2746451237 00:08:48.794 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:08:48.794 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:08:48.794 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:08:48.794 INFO: A corpus is not provided, starting from an empty corpus 00:08:48.794 #2 INITED exec/s: 0 rss: 61Mb 00:08:48.794 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:48.794 This may also happen if the target rejected all inputs we tried so far 00:08:48.794 [2024-07-23 13:50:39.789137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:48.794 [2024-07-23 13:50:39.789175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.310 NEW_FUNC[1/669]: 0x480d10 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:08:49.310 NEW_FUNC[2/669]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:49.310 #4 NEW cov: 11455 ft: 11454 corp: 2/126b lim: 320 exec/s: 0 rss: 68Mb L: 125/125 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:49.310 [2024-07-23 13:50:40.260467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.310 [2024-07-23 13:50:40.260532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.310 #10 NEW cov: 11568 ft: 11944 corp: 3/252b lim: 320 exec/s: 0 rss: 68Mb L: 126/126 MS: 1 InsertByte- 00:08:49.310 [2024-07-23 13:50:40.320477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.310 [2024-07-23 13:50:40.320516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.567 #11 NEW cov: 11574 ft: 12186 corp: 4/377b lim: 320 exec/s: 0 rss: 68Mb L: 125/126 MS: 1 CrossOver- 00:08:49.567 [2024-07-23 13:50:40.370539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.567 [2024-07-23 13:50:40.370574] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.567 #12 NEW cov: 11659 ft: 12437 corp: 5/502b lim: 320 exec/s: 0 rss: 68Mb L: 125/126 MS: 1 ChangeBit- 00:08:49.567 [2024-07-23 13:50:40.420736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.567 [2024-07-23 13:50:40.420770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.567 #13 NEW cov: 11659 ft: 12537 corp: 6/628b lim: 320 exec/s: 0 rss: 68Mb L: 126/126 MS: 1 ChangeBit- 00:08:49.567 [2024-07-23 13:50:40.481067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.568 [2024-07-23 13:50:40.481101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.568 [2024-07-23 13:50:40.481163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.568 [2024-07-23 13:50:40.481183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.568 #14 NEW cov: 11659 ft: 12790 corp: 7/793b lim: 320 exec/s: 0 rss: 68Mb L: 165/165 MS: 1 CopyPart- 00:08:49.568 [2024-07-23 13:50:40.541043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.568 [2024-07-23 13:50:40.541077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.568 #15 NEW cov: 11659 ft: 12894 corp: 8/919b lim: 320 exec/s: 0 rss: 69Mb L: 126/165 MS: 1 InsertByte- 00:08:49.826 [2024-07-23 13:50:40.591367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.591400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.826 [2024-07-23 13:50:40.591467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.591486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.826 #16 NEW cov: 11659 ft: 12939 corp: 9/1079b lim: 320 exec/s: 0 rss: 69Mb L: 160/165 MS: 1 EraseBytes- 00:08:49.826 [2024-07-23 13:50:40.651369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.651402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.826 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:49.826 #17 NEW cov: 11682 ft: 13046 corp: 10/1204b lim: 320 exec/s: 0 rss: 69Mb L: 125/165 MS: 1 CMP- DE: "\017\000\000\000"- 00:08:49.826 [2024-07-23 13:50:40.711675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.711708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.826 [2024-07-23 13:50:40.711770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.711789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.826 #18 NEW cov: 11682 ft: 13087 corp: 11/1369b lim: 320 exec/s: 0 rss: 69Mb L: 165/165 MS: 1 ChangeBit- 00:08:49.826 [2024-07-23 13:50:40.761961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.761995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.826 [2024-07-23 13:50:40.762064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.762082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.826 [2024-07-23 13:50:40.762149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.762167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:49.826 #24 NEW cov: 11682 ft: 13257 corp: 12/1572b lim: 320 exec/s: 24 rss: 69Mb L: 203/203 MS: 1 InsertRepeatedBytes- 00:08:49.826 [2024-07-23 13:50:40.822172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.822205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:49.826 [2024-07-23 13:50:40.822279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.822298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:49.826 [2024-07-23 13:50:40.822363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:49.826 [2024-07-23 13:50:40.822382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.085 #25 NEW cov: 11682 ft: 13273 corp: 13/1775b lim: 320 exec/s: 25 rss: 69Mb L: 203/203 MS: 1 ChangeBit- 00:08:50.085 [2024-07-23 13:50:40.882057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:40.882089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.085 #26 NEW cov: 11682 ft: 13289 corp: 14/1902b lim: 320 exec/s: 26 rss: 69Mb L: 127/203 MS: 1 InsertByte- 00:08:50.085 [2024-07-23 13:50:40.932179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:40.932217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.085 #27 NEW cov: 11682 ft: 13336 
corp: 15/2027b lim: 320 exec/s: 27 rss: 69Mb L: 125/203 MS: 1 ChangeBit- 00:08:50.085 [2024-07-23 13:50:40.972342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:40.972374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.085 #28 NEW cov: 11682 ft: 13346 corp: 16/2154b lim: 320 exec/s: 28 rss: 69Mb L: 127/203 MS: 1 ChangeBit- 00:08:50.085 [2024-07-23 13:50:41.032527] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:41.032560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.085 #29 NEW cov: 11682 ft: 13348 corp: 17/2281b lim: 320 exec/s: 29 rss: 69Mb L: 127/203 MS: 1 InsertByte- 00:08:50.085 [2024-07-23 13:50:41.072863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:41.072895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.085 [2024-07-23 13:50:41.072967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:41.072986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.085 [2024-07-23 13:50:41.073053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.085 [2024-07-23 13:50:41.073073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:50.343 #30 NEW cov: 11682 ft: 13401 corp: 18/2484b lim: 320 exec/s: 30 rss: 69Mb L: 203/203 MS: 1 ChangeByte- 00:08:50.343 [2024-07-23 13:50:41.132923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.343 [2024-07-23 13:50:41.132956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.343 [2024-07-23 13:50:41.133019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.343 [2024-07-23 13:50:41.133038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.343 #31 NEW cov: 11682 ft: 13448 corp: 19/2642b lim: 320 exec/s: 31 rss: 69Mb L: 158/203 MS: 1 CopyPart- 00:08:50.343 [2024-07-23 13:50:41.182967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.343 [2024-07-23 13:50:41.183000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.343 #32 NEW cov: 11682 ft: 13497 corp: 20/2768b lim: 320 exec/s: 32 rss: 69Mb L: 126/203 MS: 1 ChangeByte- 00:08:50.343 [2024-07-23 13:50:41.243140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.343 [2024-07-23 13:50:41.243173] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.343 #38 NEW cov: 11682 ft: 13527 corp: 21/2884b lim: 320 exec/s: 38 rss: 69Mb L: 116/203 MS: 1 EraseBytes- 00:08:50.343 [2024-07-23 13:50:41.293222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.343 [2024-07-23 13:50:41.293255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.343 #39 NEW cov: 11682 ft: 13565 corp: 22/3004b lim: 320 exec/s: 39 rss: 69Mb L: 120/203 MS: 1 EraseBytes- 00:08:50.343 [2024-07-23 13:50:41.353454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.343 [2024-07-23 13:50:41.353488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.601 #40 NEW cov: 11682 ft: 13580 corp: 23/3129b lim: 320 exec/s: 40 rss: 69Mb L: 125/203 MS: 1 ShuffleBytes- 00:08:50.601 [2024-07-23 13:50:41.393561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:f9000000 cdw11:00ffffff 00:08:50.601 [2024-07-23 13:50:41.393595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.601 #41 NEW cov: 11682 ft: 13610 corp: 24/3255b lim: 320 exec/s: 41 rss: 69Mb L: 126/203 MS: 1 ChangeBinInt- 00:08:50.601 [2024-07-23 13:50:41.453871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.601 [2024-07-23 13:50:41.453906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.601 [2024-07-23 13:50:41.453972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.601 [2024-07-23 13:50:41.453992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.601 #42 NEW cov: 11682 ft: 13624 corp: 25/3383b lim: 320 exec/s: 42 rss: 70Mb L: 128/203 MS: 1 InsertByte- 00:08:50.601 [2024-07-23 13:50:41.493987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.601 [2024-07-23 13:50:41.494021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.601 [2024-07-23 13:50:41.494087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff 00:08:50.601 [2024-07-23 13:50:41.494106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.601 #43 NEW cov: 11682 ft: 13643 corp: 26/3555b lim: 320 exec/s: 43 rss: 70Mb L: 172/203 MS: 1 InsertRepeatedBytes- 00:08:50.601 [2024-07-23 13:50:41.554144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.601 [2024-07-23 13:50:41.554178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.601 [2024-07-23 13:50:41.554253] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.601 [2024-07-23 13:50:41.554274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.601 #49 NEW cov: 11682 ft: 13678 corp: 27/3735b lim: 320 exec/s: 49 rss: 70Mb L: 180/203 MS: 1 CopyPart- 00:08:50.602 [2024-07-23 13:50:41.614172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.602 [2024-07-23 13:50:41.614205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.860 #50 NEW cov: 11682 ft: 13701 corp: 28/3860b lim: 320 exec/s: 50 rss: 70Mb L: 125/203 MS: 1 CopyPart- 00:08:50.860 [2024-07-23 13:50:41.664357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.860 [2024-07-23 13:50:41.664390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.860 #51 NEW cov: 11682 ft: 13714 corp: 29/3985b lim: 320 exec/s: 51 rss: 70Mb L: 125/203 MS: 1 ChangeBit- 00:08:50.860 [2024-07-23 13:50:41.704406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.860 [2024-07-23 13:50:41.704439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.860 #52 NEW cov: 11682 ft: 13721 corp: 30/4111b lim: 320 exec/s: 52 rss: 70Mb L: 126/203 MS: 1 CopyPart- 00:08:50.860 [2024-07-23 13:50:41.754754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.860 [2024-07-23 13:50:41.754787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:50.860 [2024-07-23 13:50:41.754850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:08:50.860 [2024-07-23 13:50:41.754869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:50.860 #53 NEW cov: 11682 ft: 13756 corp: 31/4260b lim: 320 exec/s: 26 rss: 70Mb L: 149/203 MS: 1 EraseBytes- 00:08:50.860 #53 DONE cov: 11682 ft: 13756 corp: 31/4260b lim: 320 exec/s: 26 rss: 70Mb 00:08:50.860 ###### Recommended dictionary. ###### 00:08:50.860 "\017\000\000\000" # Uses: 0 00:08:50.860 ###### End of recommended dictionary. 
###### 00:08:50.860 Done 53 runs in 2 second(s) 00:08:51.119 13:50:41 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_0.conf 00:08:51.119 13:50:41 -- ../common.sh@72 -- # (( i++ )) 00:08:51.119 13:50:41 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:51.119 13:50:41 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:51.119 13:50:41 -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:08:51.119 13:50:41 -- nvmf/run.sh@24 -- # local timen=1 00:08:51.119 13:50:41 -- nvmf/run.sh@25 -- # local core=0x1 00:08:51.119 13:50:41 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:51.119 13:50:41 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:08:51.119 13:50:41 -- nvmf/run.sh@29 -- # printf %02d 1 00:08:51.119 13:50:41 -- nvmf/run.sh@29 -- # port=4401 00:08:51.119 13:50:41 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:51.119 13:50:41 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:08:51.119 13:50:41 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:51.119 13:50:41 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 -r /var/tmp/spdk1.sock 00:08:51.119 [2024-07-23 13:50:41.996685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:51.119 [2024-07-23 13:50:41.996779] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909214 ] 00:08:51.119 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.377 [2024-07-23 13:50:42.366230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.635 [2024-07-23 13:50:42.460395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.635 [2024-07-23 13:50:42.460584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.635 [2024-07-23 13:50:42.523230] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.635 [2024-07-23 13:50:42.539451] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:08:51.635 INFO: Running with entropic power schedule (0xFF, 100). 00:08:51.635 INFO: Seed: 1256493280 00:08:51.635 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:08:51.635 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:08:51.635 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:08:51.635 INFO: A corpus is not provided, starting from an empty corpus 00:08:51.635 #2 INITED exec/s: 0 rss: 61Mb 00:08:51.635 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:51.635 This may also happen if the target rejected all inputs we tried so far 00:08:51.635 [2024-07-23 13:50:42.584713] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:51.635 [2024-07-23 13:50:42.584961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:51.635 [2024-07-23 13:50:42.584993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.201 NEW_FUNC[1/668]: 0x481610 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:08:52.201 NEW_FUNC[2/668]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:52.201 #16 NEW cov: 11529 ft: 11530 corp: 2/12b lim: 30 exec/s: 0 rss: 68Mb L: 11/11 MS: 4 InsertByte-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:52.201 [2024-07-23 13:50:43.045911] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.201 [2024-07-23 13:50:43.046065] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:52.201 [2024-07-23 13:50:43.046306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.046339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.046402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.046417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.201 NEW_FUNC[1/3]: 0x1537870 in nvme_ctrlr_process_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3790 00:08:52.201 NEW_FUNC[2/3]: 0x17046e0 in spdk_nvme_probe_poll_async /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme.c:1507 00:08:52.201 #17 NEW cov: 11666 ft: 12357 corp: 3/24b lim: 30 exec/s: 0 rss: 69Mb L: 12/12 MS: 1 InsertByte- 00:08:52.201 [2024-07-23 13:50:43.096012] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.096146] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.096276] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.096395] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.096648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.096675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.096729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 
13:50:43.096743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.096798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.096812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.096867] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.096881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.201 #29 NEW cov: 11672 ft: 13151 corp: 4/51b lim: 30 exec/s: 0 rss: 69Mb L: 27/27 MS: 2 CopyPart-InsertRepeatedBytes- 00:08:52.201 [2024-07-23 13:50:43.135996] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.201 [2024-07-23 13:50:43.136225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.136251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.201 #30 NEW cov: 11757 ft: 13434 corp: 5/61b lim: 30 exec/s: 0 rss: 69Mb L: 10/27 MS: 1 EraseBytes- 00:08:52.201 [2024-07-23 13:50:43.176201] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.176352] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.176476] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.201 [2024-07-23 13:50:43.176601] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.201 [2024-07-23 13:50:43.176832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.176857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.176919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.176933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.176987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.177001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.177059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.177072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.201 #31 NEW cov: 11757 ft: 13464 corp: 6/88b lim: 30 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 CrossOver- 00:08:52.201 [2024-07-23 13:50:43.216239] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.201 [2024-07-23 13:50:43.216367] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:52.201 [2024-07-23 13:50:43.216598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.216624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.201 [2024-07-23 13:50:43.216678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.201 [2024-07-23 13:50:43.216692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.460 #32 NEW cov: 11757 ft: 13504 corp: 7/100b lim: 30 exec/s: 0 rss: 69Mb L: 12/27 MS: 1 CopyPart- 00:08:52.460 [2024-07-23 13:50:43.256408] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.460 [2024-07-23 13:50:43.256539] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (109604) > buf size (4096) 00:08:52.460 [2024-07-23 13:50:43.256766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.256792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.461 [2024-07-23 13:50:43.256850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b080008 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.256865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.461 #33 NEW cov: 11780 ft: 13588 corp: 8/117b lim: 30 exec/s: 0 rss: 69Mb L: 17/27 MS: 1 InsertRepeatedBytes- 00:08:52.461 [2024-07-23 13:50:43.296514] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.461 [2024-07-23 13:50:43.296640] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:52.461 [2024-07-23 13:50:43.296864] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.296890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.461 [2024-07-23 13:50:43.296949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.296966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.461 #34 NEW cov: 11780 ft: 13616 corp: 9/129b lim: 30 exec/s: 0 rss: 69Mb L: 12/27 MS: 1 InsertByte- 00:08:52.461 [2024-07-23 13:50:43.336567] ctrlr.c:2504:nvmf_ctrlr_get_log_page: 
*ERROR*: Invalid log page offset 0x300006b6b 00:08:52.461 [2024-07-23 13:50:43.336808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.336834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.461 #35 NEW cov: 11780 ft: 13679 corp: 10/139b lim: 30 exec/s: 0 rss: 69Mb L: 10/27 MS: 1 ChangeBit- 00:08:52.461 [2024-07-23 13:50:43.376699] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.461 [2024-07-23 13:50:43.376944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.376971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.461 #36 NEW cov: 11780 ft: 13810 corp: 11/150b lim: 30 exec/s: 0 rss: 69Mb L: 11/27 MS: 1 InsertByte- 00:08:52.461 [2024-07-23 13:50:43.416854] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.461 [2024-07-23 13:50:43.416999] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11 00:08:52.461 [2024-07-23 13:50:43.417231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.417258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.461 [2024-07-23 13:50:43.417313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.417329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.461 #37 NEW cov: 11780 ft: 13888 corp: 12/167b lim: 30 exec/s: 0 rss: 69Mb L: 17/27 MS: 1 ChangeBinInt- 00:08:52.461 [2024-07-23 13:50:43.456952] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b62 00:08:52.461 [2024-07-23 13:50:43.457182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.461 [2024-07-23 13:50:43.457207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.461 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:52.461 #38 NEW cov: 11803 ft: 13942 corp: 13/178b lim: 30 exec/s: 0 rss: 69Mb L: 11/27 MS: 1 ChangeBinInt- 00:08:52.720 [2024-07-23 13:50:43.497150] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.720 [2024-07-23 13:50:43.497300] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:52.720 [2024-07-23 13:50:43.497432] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffe9 00:08:52.720 [2024-07-23 13:50:43.497566] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.720 [2024-07-23 13:50:43.497801] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.497827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.497884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.497898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.497955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.497973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.498030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.498044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:52.720 #39 NEW cov: 11803 ft: 14006 corp: 14/206b lim: 30 exec/s: 0 rss: 70Mb L: 28/28 MS: 1 InsertByte- 00:08:52.720 [2024-07-23 13:50:43.547229] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.720 [2024-07-23 13:50:43.547502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.547529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.720 #40 NEW cov: 11803 ft: 14009 corp: 15/217b lim: 30 exec/s: 40 rss: 70Mb L: 11/28 MS: 1 CrossOver- 00:08:52.720 [2024-07-23 13:50:43.587318] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007b6b 00:08:52.720 [2024-07-23 13:50:43.587451] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:52.720 [2024-07-23 13:50:43.587691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.587716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.587773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.587787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.720 #41 NEW cov: 11803 ft: 14093 corp: 16/229b lim: 30 exec/s: 41 rss: 70Mb L: 12/28 MS: 1 ChangeBit- 00:08:52.720 [2024-07-23 13:50:43.627470] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.720 [2024-07-23 13:50:43.627603] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:52.720 [2024-07-23 13:50:43.627725] 
ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.720 [2024-07-23 13:50:43.627956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.627982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.628037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.628052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.628106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.628120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.720 #42 NEW cov: 11803 ft: 14337 corp: 17/250b lim: 30 exec/s: 42 rss: 70Mb L: 21/28 MS: 1 CopyPart- 00:08:52.720 [2024-07-23 13:50:43.667541] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.720 [2024-07-23 13:50:43.667676] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11 00:08:52.720 [2024-07-23 13:50:43.667906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.667935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.720 [2024-07-23 13:50:43.667989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.668004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.720 #43 NEW cov: 11803 ft: 14370 corp: 18/267b lim: 30 exec/s: 43 rss: 70Mb L: 17/28 MS: 1 ShuffleBytes- 00:08:52.720 [2024-07-23 13:50:43.707602] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.720 [2024-07-23 13:50:43.707830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.720 [2024-07-23 13:50:43.707855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.720 #44 NEW cov: 11803 ft: 14398 corp: 19/275b lim: 30 exec/s: 44 rss: 70Mb L: 8/28 MS: 1 EraseBytes- 00:08:52.979 [2024-07-23 13:50:43.747725] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b0d 00:08:52.979 [2024-07-23 13:50:43.747966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.747991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.979 #45 NEW cov: 11803 ft: 14410 
corp: 20/282b lim: 30 exec/s: 45 rss: 70Mb L: 7/28 MS: 1 EraseBytes- 00:08:52.979 [2024-07-23 13:50:43.787871] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.979 [2024-07-23 13:50:43.788010] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:52.979 [2024-07-23 13:50:43.788223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.788249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 13:50:43.788320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.788335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.979 #46 NEW cov: 11803 ft: 14447 corp: 21/294b lim: 30 exec/s: 46 rss: 70Mb L: 12/28 MS: 1 CopyPart- 00:08:52.979 [2024-07-23 13:50:43.828045] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.979 [2024-07-23 13:50:43.828184] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110000) > buf size (4096) 00:08:52.979 [2024-07-23 13:50:43.828310] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6b 00:08:52.979 [2024-07-23 13:50:43.828540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.828566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 13:50:43.828622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.828637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 13:50:43.828690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.828704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.979 #47 NEW cov: 11803 ft: 14466 corp: 22/315b lim: 30 exec/s: 47 rss: 70Mb L: 21/28 MS: 1 InsertRepeatedBytes- 00:08:52.979 [2024-07-23 13:50:43.868133] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:52.979 [2024-07-23 13:50:43.868281] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110000) > buf size (4096) 00:08:52.979 [2024-07-23 13:50:43.868406] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6b 00:08:52.979 [2024-07-23 13:50:43.868645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.868671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 
13:50:43.868728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.868743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 13:50:43.868802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.868816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.979 #48 NEW cov: 11803 ft: 14507 corp: 23/337b lim: 30 exec/s: 48 rss: 70Mb L: 22/28 MS: 1 InsertByte- 00:08:52.979 [2024-07-23 13:50:43.908142] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b0d 00:08:52.979 [2024-07-23 13:50:43.908385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.908410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.979 #49 NEW cov: 11803 ft: 14532 corp: 24/344b lim: 30 exec/s: 49 rss: 70Mb L: 7/28 MS: 1 ShuffleBytes- 00:08:52.979 [2024-07-23 13:50:43.948344] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6b6b 00:08:52.979 [2024-07-23 13:50:43.948492] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110000) > buf size (4096) 00:08:52.979 [2024-07-23 13:50:43.948833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b7e006b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.948859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 13:50:43.948915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b006b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.948930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:52.979 [2024-07-23 13:50:43.948986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.949000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:52.979 #50 NEW cov: 11820 ft: 14606 corp: 25/367b lim: 30 exec/s: 50 rss: 70Mb L: 23/28 MS: 1 InsertByte- 00:08:52.979 [2024-07-23 13:50:43.988426] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b62 00:08:52.979 [2024-07-23 13:50:43.988666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:52.979 [2024-07-23 13:50:43.988692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 #51 NEW cov: 11820 ft: 14610 corp: 26/378b lim: 30 exec/s: 51 rss: 70Mb L: 11/28 MS: 1 CopyPart- 00:08:53.238 [2024-07-23 
13:50:44.028666] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:53.238 [2024-07-23 13:50:44.028808] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:53.238 [2024-07-23 13:50:44.028932] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:53.238 [2024-07-23 13:50:44.029054] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.238 [2024-07-23 13:50:44.029287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.029314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.029380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.029394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.029447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.029461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.029513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.029526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.238 #52 NEW cov: 11820 ft: 14616 corp: 27/407b lim: 30 exec/s: 52 rss: 70Mb L: 29/29 MS: 1 CopyPart- 00:08:53.238 [2024-07-23 13:50:44.068692] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000cf6b 00:08:53.238 [2024-07-23 13:50:44.068835] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b0d 00:08:53.238 [2024-07-23 13:50:44.069080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.069104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.069158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.069171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.238 #53 NEW cov: 11820 ft: 14620 corp: 28/420b lim: 30 exec/s: 53 rss: 70Mb L: 13/29 MS: 1 InsertByte- 00:08:53.238 [2024-07-23 13:50:44.108848] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.238 [2024-07-23 13:50:44.108980] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110000) > buf size (4096) 00:08:53.238 [2024-07-23 13:50:44.109337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.109362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.109418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.109432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.109487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00b70000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.109507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.238 #54 NEW cov: 11820 ft: 14648 corp: 29/443b lim: 30 exec/s: 54 rss: 70Mb L: 23/29 MS: 1 InsertByte- 00:08:53.238 [2024-07-23 13:50:44.148898] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.238 [2024-07-23 13:50:44.149145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.149171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 #55 NEW cov: 11820 ft: 14669 corp: 30/453b lim: 30 exec/s: 55 rss: 70Mb L: 10/29 MS: 1 ShuffleBytes- 00:08:53.238 [2024-07-23 13:50:44.189114] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.238 [2024-07-23 13:50:44.189267] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110000) > buf size (4096) 00:08:53.238 [2024-07-23 13:50:44.189399] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (23808) > len (736) 00:08:53.238 [2024-07-23 13:50:44.189522] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000380a 00:08:53.238 [2024-07-23 13:50:44.189767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.189793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.189848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.189862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.189915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00b70000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.189929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.189982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:006b816b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:53.238 [2024-07-23 13:50:44.189996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.238 #56 NEW cov: 11826 ft: 14683 corp: 31/477b lim: 30 exec/s: 56 rss: 70Mb L: 24/29 MS: 1 InsertByte- 00:08:53.238 [2024-07-23 13:50:44.239309] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.238 [2024-07-23 13:50:44.239453] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:53.238 [2024-07-23 13:50:44.239577] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (896432) > buf size (4096) 00:08:53.238 [2024-07-23 13:50:44.239699] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000f66a 00:08:53.238 [2024-07-23 13:50:44.239951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.239976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.240029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.240043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.240095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.240111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.238 [2024-07-23 13:50:44.240164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:36e70217 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.238 [2024-07-23 13:50:44.240178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.497 #57 NEW cov: 11826 ft: 14692 corp: 32/506b lim: 30 exec/s: 57 rss: 71Mb L: 29/29 MS: 1 CMP- DE: "\377,6\347\027\316\366j"- 00:08:53.497 [2024-07-23 13:50:44.289450] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.497 [2024-07-23 13:50:44.289579] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000246b 00:08:53.497 [2024-07-23 13:50:44.289699] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.497 [2024-07-23 13:50:44.289819] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:53.497 [2024-07-23 13:50:44.290046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b8324 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.290072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.290130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.290145] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.290199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.290216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.290285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0d0a836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.290299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.497 #58 NEW cov: 11826 ft: 14739 corp: 33/530b lim: 30 exec/s: 58 rss: 71Mb L: 24/29 MS: 1 CopyPart- 00:08:53.497 [2024-07-23 13:50:44.329591] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.497 [2024-07-23 13:50:44.329724] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6b6b 00:08:53.497 [2024-07-23 13:50:44.329848] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11 00:08:53.497 [2024-07-23 13:50:44.329971] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (372144) > buf size (4096) 00:08:53.497 [2024-07-23 13:50:44.330092] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000d0a 00:08:53.497 [2024-07-23 13:50:44.330342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.330368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.330420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.330434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.330487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6b080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.330501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.330557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6b6b816b cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.330571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.330623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:116b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.330637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:53.497 #59 NEW cov: 11826 ft: 14774 corp: 34/560b lim: 30 exec/s: 59 rss: 71Mb L: 30/30 MS: 1 CopyPart- 
00:08:53.497 [2024-07-23 13:50:44.379725] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (110000) > buf size (4096) 00:08:53.497 [2024-07-23 13:50:44.379853] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.497 [2024-07-23 13:50:44.379974] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006b6b 00:08:53.497 [2024-07-23 13:50:44.380095] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b0d 00:08:53.497 [2024-07-23 13:50:44.380336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.380360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.380417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.380432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.380488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:6b6b020d cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.380501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.380548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:6b6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.380561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.497 #60 NEW cov: 11826 ft: 14780 corp: 35/585b lim: 30 exec/s: 60 rss: 71Mb L: 25/30 MS: 1 InsertRepeatedBytes- 00:08:53.497 [2024-07-23 13:50:44.419706] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x6b6b 00:08:53.497 [2024-07-23 13:50:44.419948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6b6b006b cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.419974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.497 #61 NEW cov: 11826 ft: 14808 corp: 36/595b lim: 30 exec/s: 61 rss: 71Mb L: 10/30 MS: 1 ChangeBinInt- 00:08:53.497 [2024-07-23 13:50:44.459942] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:53.497 [2024-07-23 13:50:44.460074] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:53.497 [2024-07-23 13:50:44.460198] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:08:53.497 [2024-07-23 13:50:44.460333] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.497 [2024-07-23 13:50:44.460569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6bff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.460594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.460656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.460670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.460726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.460740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:53.497 [2024-07-23 13:50:44.460795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:e9ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.497 [2024-07-23 13:50:44.460809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:53.497 #62 NEW cov: 11826 ft: 14810 corp: 37/624b lim: 30 exec/s: 62 rss: 71Mb L: 29/30 MS: 1 CrossOver- 00:08:53.498 [2024-07-23 13:50:44.509988] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.498 [2024-07-23 13:50:44.510122] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11 00:08:53.498 [2024-07-23 13:50:44.510369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-07-23 13:50:44.510395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.498 [2024-07-23 13:50:44.510452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.498 [2024-07-23 13:50:44.510467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.756 #63 NEW cov: 11826 ft: 14814 corp: 38/641b lim: 30 exec/s: 63 rss: 71Mb L: 17/30 MS: 1 ChangeByte- 00:08:53.756 [2024-07-23 13:50:44.550118] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300002c36 00:08:53.756 [2024-07-23 13:50:44.550270] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x200006a11 00:08:53.756 [2024-07-23 13:50:44.550501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.756 [2024-07-23 13:50:44.550527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.756 [2024-07-23 13:50:44.550583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:e71702ce cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.756 [2024-07-23 13:50:44.550597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.756 #64 NEW cov: 11826 ft: 14832 corp: 39/658b lim: 30 exec/s: 64 rss: 71Mb L: 17/30 MS: 1 PersAutoDict- DE: "\377,6\347\027\316\366j"- 00:08:53.756 [2024-07-23 13:50:44.590307] 
ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300006b6b 00:08:53.756 [2024-07-23 13:50:44.590435] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x11 00:08:53.756 [2024-07-23 13:50:44.590670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3f6b836b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.756 [2024-07-23 13:50:44.590695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:53.756 [2024-07-23 13:50:44.590748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:6b080000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:53.756 [2024-07-23 13:50:44.590762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:53.756 #65 NEW cov: 11826 ft: 14836 corp: 40/675b lim: 30 exec/s: 32 rss: 71Mb L: 17/30 MS: 1 ChangeBinInt- 00:08:53.756 #65 DONE cov: 11826 ft: 14836 corp: 40/675b lim: 30 exec/s: 32 rss: 71Mb 00:08:53.756 ###### Recommended dictionary. ###### 00:08:53.756 "\377,6\347\027\316\366j" # Uses: 1 00:08:53.756 ###### End of recommended dictionary. ###### 00:08:53.756 Done 65 runs in 2 second(s) 00:08:53.756 13:50:44 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_1.conf 00:08:53.756 13:50:44 -- ../common.sh@72 -- # (( i++ )) 00:08:53.756 13:50:44 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:53.756 13:50:44 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:53.756 13:50:44 -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:08:53.756 13:50:44 -- nvmf/run.sh@24 -- # local timen=1 00:08:53.756 13:50:44 -- nvmf/run.sh@25 -- # local core=0x1 00:08:53.756 13:50:44 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:53.756 13:50:44 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:08:54.015 13:50:44 -- nvmf/run.sh@29 -- # printf %02d 2 00:08:54.015 13:50:44 -- nvmf/run.sh@29 -- # port=4402 00:08:54.015 13:50:44 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:54.015 13:50:44 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:08:54.015 13:50:44 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:54.015 13:50:44 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 -r /var/tmp/spdk2.sock 00:08:54.015 [2024-07-23 13:50:44.817822] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
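(The run.sh xtrace above compresses the per-fuzzer setup into one stream. Read as a script, the steps it traces for fuzzer_type=2 amount to the sketch below. Variable names mirror the trace itself; the sed output redirection and the exact quoting are assumptions, since the Jenkins xtrace does not show redirections, and $rootdir stands in for the long workspace path in the log.)

    #!/usr/bin/env bash
    # Sketch of the start_llvm_fuzz steps traced above, for fuzzer_type=2.
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk  # workspace path from the log

    fuzzer_type=2
    timen=1        # local from the trace; becomes -t below
    core=0x1       # local from the trace; becomes -m (core mask) below
    port=44$(printf %02d "$fuzzer_type")   # printf %02d 2 -> "02", so port=4402
    nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
    corpus_dir=$rootdir/../corpus/llvm_nvmf_${fuzzer_type}
    mkdir -p "$corpus_dir"

    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Rewrite the template config's listener port (4420) to this run's port;
    # redirection into $nvmf_cfg is assumed, the xtrace elides it.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Flag roles other than -t/-Z (timen/fuzzer_type in the trace) are
    # inferred only from the values they carry in the logged command line.
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$rootdir/../output/llvm/" \
        -F "$trid" -c "$nvmf_cfg" -t "$timen" \
        -D "$corpus_dir" -Z "$fuzzer_type" -r "/var/tmp/spdk${fuzzer_type}.sock"

(The run-3 spin-up later in this log repeats the same sequence with fuzzer_type=3, yielding port 4403, /tmp/fuzz_json_3.conf and /var/tmp/spdk3.sock.)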
00:08:54.015 [2024-07-23 13:50:44.817899] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909588 ] 00:08:54.015 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.273 [2024-07-23 13:50:45.185626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.273 [2024-07-23 13:50:45.292600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.273 [2024-07-23 13:50:45.292785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.531 [2024-07-23 13:50:45.355365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.531 [2024-07-23 13:50:45.371588] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:08:54.531 INFO: Running with entropic power schedule (0xFF, 100). 00:08:54.531 INFO: Seed: 4088493241 00:08:54.531 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:08:54.531 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:08:54.531 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:08:54.531 INFO: A corpus is not provided, starting from an empty corpus 00:08:54.531 #2 INITED exec/s: 0 rss: 61Mb 00:08:54.531 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:54.531 This may also happen if the target rejected all inputs we tried so far 00:08:54.531 [2024-07-23 13:50:45.417373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.531 [2024-07-23 13:50:45.417403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:54.531 [2024-07-23 13:50:45.417481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.531 [2024-07-23 13:50:45.417496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:54.531 [2024-07-23 13:50:45.417556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:54.531 [2024-07-23 13:50:45.417571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.097 NEW_FUNC[1/670]: 0x484030 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:08:55.097 NEW_FUNC[2/670]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:55.097 #3 NEW cov: 11511 ft: 11512 corp: 2/22b lim: 35 exec/s: 0 rss: 68Mb L: 21/21 MS: 1 InsertRepeatedBytes- 00:08:55.097 [2024-07-23 13:50:45.878590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.878636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:08:55.097 [2024-07-23 13:50:45.878695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.878709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.097 [2024-07-23 13:50:45.878766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.878781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.097 #10 NEW cov: 11624 ft: 11915 corp: 3/45b lim: 35 exec/s: 0 rss: 68Mb L: 23/23 MS: 2 ChangeByte-InsertRepeatedBytes- 00:08:55.097 [2024-07-23 13:50:45.918731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.918759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.097 [2024-07-23 13:50:45.918820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.918835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.097 [2024-07-23 13:50:45.918895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.918910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.097 [2024-07-23 13:50:45.918968] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.097 [2024-07-23 13:50:45.918983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.097 #16 NEW cov: 11630 ft: 12668 corp: 4/76b lim: 35 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:08:55.098 [2024-07-23 13:50:45.958865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:45.958892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:45.958953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:45.958968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:45.959034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:c700ffc7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:45.959054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:08:55.098 [2024-07-23 13:50:45.959139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:45.959157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.098 #17 NEW cov: 11715 ft: 13077 corp: 5/109b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 CrossOver- 00:08:55.098 [2024-07-23 13:50:46.008899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.008926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:46.008987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.009002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:46.009061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.009077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.098 #18 NEW cov: 11715 ft: 13186 corp: 6/132b lim: 35 exec/s: 0 rss: 69Mb L: 23/33 MS: 1 ChangeByte- 00:08:55.098 [2024-07-23 13:50:46.048948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.048974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:46.049037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.049052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:46.049112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0076 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.049127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.098 #19 NEW cov: 11715 ft: 13221 corp: 7/155b lim: 35 exec/s: 0 rss: 69Mb L: 23/33 MS: 1 ChangeByte- 00:08:55.098 [2024-07-23 13:50:46.089178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.089205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:46.089269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.089285] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.098 [2024-07-23 13:50:46.089347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ff3e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.098 [2024-07-23 13:50:46.089363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.098 #20 NEW cov: 11715 ft: 13252 corp: 8/178b lim: 35 exec/s: 0 rss: 69Mb L: 23/33 MS: 1 ShuffleBytes- 00:08:55.356 [2024-07-23 13:50:46.129224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.129252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.129313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.129329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.129388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:32c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.129403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.356 #21 NEW cov: 11715 ft: 13277 corp: 9/200b lim: 35 exec/s: 0 rss: 69Mb L: 22/33 MS: 1 InsertByte- 00:08:55.356 [2024-07-23 13:50:46.169371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.169397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.169459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.169473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.169535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.169550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.356 #22 NEW cov: 11715 ft: 13331 corp: 10/223b lim: 35 exec/s: 0 rss: 69Mb L: 23/33 MS: 1 ShuffleBytes- 00:08:55.356 [2024-07-23 13:50:46.209519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.209545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.209607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.209622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.209691] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:c7ff00c7 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.209711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.356 #23 NEW cov: 11715 ft: 13342 corp: 11/248b lim: 35 exec/s: 0 rss: 69Mb L: 25/33 MS: 1 InsertRepeatedBytes- 00:08:55.356 [2024-07-23 13:50:46.249729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.249755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.249813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.249828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.249889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:32c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.249904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.249961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:c7c700c7 cdw11:c700c732 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.249975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.356 #24 NEW cov: 11715 ft: 13382 corp: 12/281b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 CopyPart- 00:08:55.356 [2024-07-23 13:50:46.289850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.289876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.289936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:76ff0076 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.289951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.290009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.290023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.290080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.290094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.356 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:55.356 #25 NEW cov: 11738 ft: 13432 corp: 13/315b lim: 35 exec/s: 0 rss: 69Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:55.356 [2024-07-23 13:50:46.340017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.356 [2024-07-23 13:50:46.340043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.356 [2024-07-23 13:50:46.340102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.357 [2024-07-23 13:50:46.340118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.357 [2024-07-23 13:50:46.340175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:32c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.357 [2024-07-23 13:50:46.340190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.357 [2024-07-23 13:50:46.340244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d7c700c7 cdw11:c700c732 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.357 [2024-07-23 13:50:46.340259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.357 #26 NEW cov: 11738 ft: 13467 corp: 14/348b lim: 35 exec/s: 0 rss: 69Mb L: 33/34 MS: 1 ChangeBit- 00:08:55.615 [2024-07-23 13:50:46.389939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.389965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.390032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.390047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.390108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0076 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.390123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.615 #27 NEW cov: 11738 ft: 13488 corp: 15/371b lim: 35 exec/s: 27 rss: 69Mb L: 23/34 MS: 1 CopyPart- 00:08:55.615 [2024-07-23 13:50:46.429804] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:55.615 [2024-07-23 13:50:46.430314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:55.615 [2024-07-23 13:50:46.430342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.430402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.430420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.430477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.430491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.430548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.430563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.615 #28 NEW cov: 11747 ft: 13517 corp: 16/405b lim: 35 exec/s: 28 rss: 69Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:08:55.615 [2024-07-23 13:50:46.470228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.470254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.470314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.470329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.615 [2024-07-23 13:50:46.470394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:33c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.615 [2024-07-23 13:50:46.470415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.615 #29 NEW cov: 11747 ft: 13530 corp: 17/427b lim: 35 exec/s: 29 rss: 69Mb L: 22/34 MS: 1 ChangeASCIIInt- 00:08:55.616 [2024-07-23 13:50:46.510054] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:55.616 [2024-07-23 13:50:46.510541] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.510568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.510630] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.510647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.510704] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.510719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.510775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.510790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.616 #30 NEW cov: 11747 ft: 13557 corp: 18/461b lim: 35 exec/s: 30 rss: 70Mb L: 34/34 MS: 1 CrossOver- 00:08:55.616 [2024-07-23 13:50:46.550638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.550665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.550725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.550740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.550796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.550812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.550870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff003e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.550884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.616 #31 NEW cov: 11747 ft: 13574 corp: 19/489b lim: 35 exec/s: 31 rss: 70Mb L: 28/34 MS: 1 CopyPart- 00:08:55.616 [2024-07-23 13:50:46.590438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.590465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.616 [2024-07-23 13:50:46.590526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.590540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.616 #32 NEW cov: 11747 ft: 13814 corp: 20/509b lim: 35 exec/s: 32 rss: 70Mb L: 20/34 MS: 1 EraseBytes- 00:08:55.616 [2024-07-23 13:50:46.630430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ff0600ff cdw11:ff000aff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.616 [2024-07-23 13:50:46.630458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.874 #36 NEW cov: 11747 ft: 14213 corp: 21/517b lim: 35 exec/s: 36 rss: 70Mb L: 8/34 MS: 4 CrossOver-ChangeBinInt-EraseBytes-CopyPart- 00:08:55.874 [2024-07-23 13:50:46.670736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.670763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.670827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.670842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.874 #37 NEW cov: 11747 ft: 14232 corp: 22/535b lim: 35 exec/s: 37 rss: 70Mb L: 18/34 MS: 1 EraseBytes- 00:08:55.874 [2024-07-23 13:50:46.710947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.710974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.711035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:26ff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.711050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.711111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0076 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.711127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.874 #38 NEW cov: 11747 ft: 14264 corp: 23/558b lim: 35 exec/s: 38 rss: 70Mb L: 23/34 MS: 1 ChangeByte- 00:08:55.874 [2024-07-23 13:50:46.750771] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:55.874 [2024-07-23 13:50:46.751403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.751429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.751492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.751510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.751571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.751586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.751651] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.751669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.874 [2024-07-23 13:50:46.751734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.751751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:55.874 #39 NEW cov: 11747 ft: 14342 corp: 24/593b lim: 35 exec/s: 39 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:08:55.874 [2024-07-23 13:50:46.791182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.874 [2024-07-23 13:50:46.791208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.791273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.791291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.791347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff006c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.791361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.875 #40 NEW cov: 11747 ft: 14361 corp: 25/616b lim: 35 exec/s: 40 rss: 70Mb L: 23/35 MS: 1 ChangeByte- 00:08:55.875 [2024-07-23 13:50:46.831336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c707000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.831363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.831427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.831443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.831508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.831524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.875 #41 NEW cov: 11747 ft: 14381 corp: 26/638b lim: 35 exec/s: 41 rss: 70Mb L: 22/35 MS: 1 InsertByte- 00:08:55.875 [2024-07-23 13:50:46.871566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.871593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.871654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.871669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.871727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.871742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:55.875 [2024-07-23 13:50:46.871799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.875 [2024-07-23 13:50:46.871813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:55.875 #42 NEW cov: 11747 ft: 14396 corp: 27/671b lim: 35 exec/s: 42 rss: 70Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:08:56.133 [2024-07-23 13:50:46.911595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.911622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.911683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.911698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.911758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.911777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.133 #43 NEW cov: 11747 ft: 14497 corp: 28/698b lim: 35 exec/s: 43 rss: 70Mb L: 27/35 MS: 1 CMP- DE: "\001\000\000\000"- 00:08:56.133 [2024-07-23 13:50:46.951830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.951857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.951917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.951931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.951986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.952001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.952060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff003e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.952074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.133 #44 NEW cov: 11747 ft: 14516 corp: 29/726b lim: 35 exec/s: 44 rss: 70Mb L: 28/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:56.133 [2024-07-23 13:50:46.991783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.991809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.991873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.991889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.133 [2024-07-23 13:50:46.991949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:33c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:46.991966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.133 #45 NEW cov: 11747 ft: 14543 corp: 30/748b lim: 35 exec/s: 45 rss: 70Mb L: 22/35 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:08:56.133 [2024-07-23 13:50:47.032014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff0041ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.133 [2024-07-23 13:50:47.032042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.032101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.032115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.032174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:6cff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.032195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.134 #46 NEW cov: 11747 ft: 14580 corp: 31/772b lim: 35 exec/s: 46 rss: 70Mb L: 24/35 MS: 1 InsertByte- 00:08:56.134 [2024-07-23 13:50:47.072025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.072056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.072118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:08:56.134 [2024-07-23 13:50:47.072133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.072192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0076 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.072208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.134 #47 NEW cov: 11747 ft: 14601 corp: 32/795b lim: 35 exec/s: 47 rss: 70Mb L: 23/35 MS: 1 ChangeBit- 00:08:56.134 [2024-07-23 13:50:47.112149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.112176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.112226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff002c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.112242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.112302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff006c cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.112316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.134 #48 NEW cov: 11747 ft: 14632 corp: 33/818b lim: 35 exec/s: 48 rss: 70Mb L: 23/35 MS: 1 ChangeByte- 00:08:56.134 [2024-07-23 13:50:47.152240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.152267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.152327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:26ff00ff cdw11:7600ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.152343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.134 [2024-07-23 13:50:47.152403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.134 [2024-07-23 13:50:47.152418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.392 #49 NEW cov: 11747 ft: 14642 corp: 34/841b lim: 35 exec/s: 49 rss: 70Mb L: 23/35 MS: 1 ShuffleBytes- 00:08:56.392 [2024-07-23 13:50:47.192157] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:56.392 [2024-07-23 13:50:47.192775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.392 [2024-07-23 13:50:47.192802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:08:56.392 [2024-07-23 13:50:47.192863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.392 [2024-07-23 13:50:47.192882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.392 [2024-07-23 13:50:47.192945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.392 [2024-07-23 13:50:47.192961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.392 [2024-07-23 13:50:47.193023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.392 [2024-07-23 13:50:47.193039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.392 [2024-07-23 13:50:47.193098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00efff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.392 [2024-07-23 13:50:47.193112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:56.392 #50 NEW cov: 11747 ft: 14663 corp: 35/876b lim: 35 exec/s: 50 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:08:56.393 [2024-07-23 13:50:47.242565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.242591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.242648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:26bf00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.242664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.242726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff0076 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.242742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.393 #51 NEW cov: 11747 ft: 14683 corp: 36/899b lim: 35 exec/s: 51 rss: 70Mb L: 23/35 MS: 1 ChangeBit- 00:08:56.393 [2024-07-23 13:50:47.282960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.282985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.283044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:aeae00ae cdw11:ae00aeae SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.283059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.283123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:aeae00ae cdw11:2c00aeff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.283146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.283215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:6c00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.283231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.283290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.283305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:08:56.393 #52 NEW cov: 11747 ft: 14714 corp: 37/934b lim: 35 exec/s: 52 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:08:56.393 [2024-07-23 13:50:47.332650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.332676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.332735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.332750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.393 #53 NEW cov: 11747 ft: 14717 corp: 38/952b lim: 35 exec/s: 53 rss: 71Mb L: 18/35 MS: 1 ChangeBit- 00:08:56.393 [2024-07-23 13:50:47.372878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:c7c7000a cdw11:c700c7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.372904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.372963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:c7c700c7 cdw11:c700e7c7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.372978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.373039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:c7ff00c7 cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.373055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.393 #54 NEW cov: 11747 ft: 14729 corp: 39/977b lim: 35 exec/s: 54 rss: 71Mb L: 25/35 MS: 1 ChangeBit- 00:08:56.393 [2024-07-23 13:50:47.413205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff002b cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.413237] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.413298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.413314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.413375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.413391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:56.393 [2024-07-23 13:50:47.413451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff003e cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:56.393 [2024-07-23 13:50:47.413467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:08:56.652 #55 NEW cov: 11747 ft: 14738 corp: 40/1005b lim: 35 exec/s: 27 rss: 71Mb L: 28/35 MS: 1 CrossOver- 00:08:56.652 #55 DONE cov: 11747 ft: 14738 corp: 40/1005b lim: 35 exec/s: 27 rss: 71Mb 00:08:56.652 ###### Recommended dictionary. ###### 00:08:56.652 "\001\000\000\000" # Uses: 2 00:08:56.652 ###### End of recommended dictionary. ###### 00:08:56.652 Done 55 runs in 2 second(s) 00:08:56.652 13:50:47 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_2.conf 00:08:56.652 13:50:47 -- ../common.sh@72 -- # (( i++ )) 00:08:56.652 13:50:47 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:56.652 13:50:47 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:56.652 13:50:47 -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:08:56.652 13:50:47 -- nvmf/run.sh@24 -- # local timen=1 00:08:56.652 13:50:47 -- nvmf/run.sh@25 -- # local core=0x1 00:08:56.652 13:50:47 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:56.652 13:50:47 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:08:56.652 13:50:47 -- nvmf/run.sh@29 -- # printf %02d 3 00:08:56.652 13:50:47 -- nvmf/run.sh@29 -- # port=4403 00:08:56.652 13:50:47 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:56.652 13:50:47 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:08:56.652 13:50:47 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:56.652 13:50:47 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 -r /var/tmp/spdk3.sock 00:08:56.652 [2024-07-23 13:50:47.626763] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
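[Editorial note, not part of the captured log: the run.sh trace above shows how each fuzzer instance is wired up. The following is a condensed sketch of what that trace implies, not the actual run.sh; the "44" + zero-padded-index port rule and the sed output redirection are inferred from the visible commands, and $rootdir / $output_dir stand in for the long workspace paths.]

fuzzer_type=3                             # -Z argument and config suffix seen in the trace
timen=1                                   # -t, run time in seconds
core=0x1                                  # -m, core mask
port="44$(printf %02d "$fuzzer_type")"    # printf %02d 3 -> "03", hence port 4403
corpus_dir="$rootdir/../corpus/llvm_nvmf_$fuzzer_type"
nvmf_cfg="/tmp/fuzz_json_$fuzzer_type.conf"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

mkdir -p "$corpus_dir"
# rewrite the template config so this instance listens on its own TCP port
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
"$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
    -P "$output_dir/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
    -D "$corpus_dir" -Z "$fuzzer_type" -r "/var/tmp/spdk$fuzzer_type.sock"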
00:08:56.652 [2024-07-23 13:50:47.626846] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909956 ] 00:08:56.652 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.219 [2024-07-23 13:50:47.972881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.219 [2024-07-23 13:50:48.082092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.219 [2024-07-23 13:50:48.082296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.219 [2024-07-23 13:50:48.144990] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.219 [2024-07-23 13:50:48.161221] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:08:57.219 INFO: Running with entropic power schedule (0xFF, 100). 00:08:57.219 INFO: Seed: 2584527712 00:08:57.219 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:08:57.219 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:08:57.219 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:08:57.219 INFO: A corpus is not provided, starting from an empty corpus 00:08:57.219 #2 INITED exec/s: 0 rss: 61Mb 00:08:57.219 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:57.219 This may also happen if the target rejected all inputs we tried so far 00:08:57.219 [2024-07-23 13:50:48.216798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.219 [2024-07-23 13:50:48.216853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.736 NEW_FUNC[1/679]: 0x485d00 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:08:57.736 NEW_FUNC[2/679]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:57.736 #13 NEW cov: 11756 ft: 11738 corp: 2/21b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:57.736 [2024-07-23 13:50:48.728046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.736 [2024-07-23 13:50:48.728109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.994 #19 NEW cov: 11869 ft: 12367 corp: 3/41b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 ChangeBinInt- 00:08:57.994 #20 NEW cov: 11875 ft: 12709 corp: 4/61b lim: 20 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:57.994 [2024-07-23 13:50:48.898302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:57.994 [2024-07-23 13:50:48.898351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:57.994 #21 NEW cov: 11960 ft: 12984 corp: 5/81b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeBit- 00:08:58.252 #22 NEW cov: 11968 ft: 13203 corp: 6/95b lim: 20 exec/s: 0 rss: 69Mb L: 14/20 MS: 
1 EraseBytes- 00:08:58.252 [2024-07-23 13:50:49.068727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.252 [2024-07-23 13:50:49.068772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.252 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:58.252 #23 NEW cov: 11985 ft: 13335 corp: 7/115b lim: 20 exec/s: 0 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:08:58.252 #24 NEW cov: 11985 ft: 13385 corp: 8/135b lim: 20 exec/s: 24 rss: 69Mb L: 20/20 MS: 1 ChangeBinInt- 00:08:58.509 #25 NEW cov: 11985 ft: 13394 corp: 9/155b lim: 20 exec/s: 25 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:08:58.509 #26 NEW cov: 11985 ft: 13477 corp: 10/175b lim: 20 exec/s: 26 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:08:58.509 [2024-07-23 13:50:49.389653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.509 [2024-07-23 13:50:49.389698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.509 #27 NEW cov: 11985 ft: 13526 corp: 11/195b lim: 20 exec/s: 27 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:08:58.509 [2024-07-23 13:50:49.489900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.509 [2024-07-23 13:50:49.489944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:58.767 #28 NEW cov: 11985 ft: 13570 corp: 12/215b lim: 20 exec/s: 28 rss: 69Mb L: 20/20 MS: 1 ChangeBit- 00:08:58.767 #29 NEW cov: 11985 ft: 13650 corp: 13/235b lim: 20 exec/s: 29 rss: 69Mb L: 20/20 MS: 1 CrossOver- 00:08:58.767 [2024-07-23 13:50:49.680536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.767 [2024-07-23 13:50:49.680583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:58.767 #30 NEW cov: 11988 ft: 13688 corp: 14/255b lim: 20 exec/s: 30 rss: 69Mb L: 20/20 MS: 1 CMP- DE: " \000\000\000"- 00:08:58.767 [2024-07-23 13:50:49.760715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:58.767 [2024-07-23 13:50:49.760761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:08:59.025 #31 NEW cov: 11988 ft: 13811 corp: 15/275b lim: 20 exec/s: 31 rss: 69Mb L: 20/20 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:08:59.025 #32 NEW cov: 11988 ft: 13829 corp: 16/295b lim: 20 exec/s: 32 rss: 69Mb L: 20/20 MS: 1 ChangeByte- 00:08:59.025 [2024-07-23 13:50:49.931051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:59.025 [2024-07-23 13:50:49.931097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:08:59.025 #33 NEW cov: 11989 ft: 13951 corp: 17/312b lim: 20 exec/s: 33 rss: 70Mb L: 17/20 MS: 1 EraseBytes- 00:08:59.284 #34 NEW cov: 11989 ft: 13962 corp: 18/330b lim: 20 exec/s: 
34 rss: 70Mb L: 18/20 MS: 1 PersAutoDict- DE: " \000\000\000"- 00:08:59.284 #35 NEW cov: 11996 ft: 14017 corp: 19/348b lim: 20 exec/s: 35 rss: 70Mb L: 18/20 MS: 1 ChangeBinInt- 00:08:59.284 #36 NEW cov: 11996 ft: 14075 corp: 20/360b lim: 20 exec/s: 18 rss: 70Mb L: 12/20 MS: 1 CrossOver- 00:08:59.284 #36 DONE cov: 11996 ft: 14075 corp: 20/360b lim: 20 exec/s: 18 rss: 70Mb 00:08:59.284 ###### Recommended dictionary. ###### 00:08:59.284 " \000\000\000" # Uses: 2 00:08:59.284 ###### End of recommended dictionary. ###### 00:08:59.284 Done 36 runs in 2 second(s) 00:08:59.544 13:50:50 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_3.conf 00:08:59.544 13:50:50 -- ../common.sh@72 -- # (( i++ )) 00:08:59.544 13:50:50 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:59.544 13:50:50 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:59.544 13:50:50 -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:08:59.544 13:50:50 -- nvmf/run.sh@24 -- # local timen=1 00:08:59.544 13:50:50 -- nvmf/run.sh@25 -- # local core=0x1 00:08:59.544 13:50:50 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:59.544 13:50:50 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:08:59.544 13:50:50 -- nvmf/run.sh@29 -- # printf %02d 4 00:08:59.544 13:50:50 -- nvmf/run.sh@29 -- # port=4404 00:08:59.544 13:50:50 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:08:59.544 13:50:50 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:08:59.544 13:50:50 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:59.544 13:50:50 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 -r /var/tmp/spdk4.sock 00:08:59.544 [2024-07-23 13:50:50.424325] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:59.544 [2024-07-23 13:50:50.424396] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910320 ] 00:08:59.544 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.802 [2024-07-23 13:50:50.798751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.093 [2024-07-23 13:50:50.906685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.093 [2024-07-23 13:50:50.906879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.093 [2024-07-23 13:50:50.969507] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.093 [2024-07-23 13:50:50.985725] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:09:00.093 INFO: Running with entropic power schedule (0xFF, 100). 
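[Editorial note, not part of the captured log: each run above ends with a "###### Recommended dictionary ######" block. Those tokens can be carried into later runs through libFuzzer's standard -dict= option; whether this SPDK harness forwards that option to libFuzzer is an assumption, and the file path below is invented for illustration. The log prints the tokens with C-style octal escapes, while libFuzzer dictionary files use \xNN hex escapes.]

cat > /tmp/nvmf_fuzz.dict <<'EOF'
# libFuzzer/AFL dictionary format: one name="value" entry per line
kw1="\x01\x00\x00\x00"
kw2=" \x00\x00\x00"
EOF
# adding -dict=/tmp/nvmf_fuzz.dict to the llvm_nvme_fuzz command line would
# then seed mutations with these tokens on subsequent runs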
00:09:00.093 INFO: Seed: 1113562296 00:09:00.093 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:00.093 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:00.093 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:00.093 INFO: A corpus is not provided, starting from an empty corpus 00:09:00.093 #2 INITED exec/s: 0 rss: 61Mb 00:09:00.093 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:00.093 This may also happen if the target rejected all inputs we tried so far 00:09:00.093 [2024-07-23 13:50:51.052174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.093 [2024-07-23 13:50:51.052222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.093 [2024-07-23 13:50:51.052290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.093 [2024-07-23 13:50:51.052310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.093 [2024-07-23 13:50:51.052374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.093 [2024-07-23 13:50:51.052393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.093 [2024-07-23 13:50:51.052456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.093 [2024-07-23 13:50:51.052482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.093 [2024-07-23 13:50:51.052546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.093 [2024-07-23 13:50:51.052565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.661 NEW_FUNC[1/671]: 0x486df0 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:09:00.661 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:00.661 #6 NEW cov: 11532 ft: 11519 corp: 2/36b lim: 35 exec/s: 0 rss: 68Mb L: 35/35 MS: 4 ChangeBit-CopyPart-CopyPart-InsertRepeatedBytes- 00:09:00.661 [2024-07-23 13:50:51.523665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.523738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.523842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:00.661 [2024-07-23 13:50:51.523874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.523965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.523994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.524088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.524118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.524209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.524250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.661 #7 NEW cov: 11645 ft: 12057 corp: 3/71b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:09:00.661 [2024-07-23 13:50:51.583369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.583408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.583484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.583503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.583566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.583585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.583649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.583669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.583738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.583757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.661 #8 NEW cov: 11651 ft: 12358 corp: 4/106b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:09:00.661 [2024-07-23 13:50:51.643868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:00.661 [2024-07-23 13:50:51.643902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.643980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.643999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.644059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.644079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.644146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.644165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.661 [2024-07-23 13:50:51.644235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:ef0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.661 [2024-07-23 13:50:51.644254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.920 #9 NEW cov: 11736 ft: 12613 corp: 5/141b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:09:00.920 [2024-07-23 13:50:51.703779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:ef300003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.703813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.703889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.703908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.703975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.703994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.704059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.704078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.704143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.704162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.920 #10 NEW cov: 11736 ft: 12734 corp: 6/176b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:09:00.920 [2024-07-23 13:50:51.753872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.753905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.753978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.753997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.754061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.754081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.754144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.754163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.920 [2024-07-23 13:50:51.754232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.920 [2024-07-23 13:50:51.754252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.920 #11 NEW cov: 11736 ft: 12759 corp: 7/211b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:09:00.920 [2024-07-23 13:50:51.804044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:10efefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.804078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.804152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.804171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.804252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.804271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.804336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.804354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.804420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.804441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.921 #12 NEW cov: 11736 ft: 12818 corp: 8/246b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:09:00.921 [2024-07-23 13:50:51.854082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.854115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.854195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.854220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.854287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.854306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.854372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.854391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.854458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.854476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:00.921 #13 NEW cov: 11736 ft: 12918 corp: 9/281b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeByte- 00:09:00.921 [2024-07-23 13:50:51.904222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.904255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.904334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.904353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.904422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.904442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.904509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.904528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:00.921 [2024-07-23 13:50:51.904593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:00.921 [2024-07-23 13:50:51.904612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.180 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:01.180 #14 NEW cov: 11759 ft: 13028 corp: 10/316b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:09:01.180 [2024-07-23 13:50:51.964385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.180 [2024-07-23 13:50:51.964419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.180 [2024-07-23 13:50:51.964490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.180 [2024-07-23 13:50:51.964509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.180 [2024-07-23 13:50:51.964578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.180 [2024-07-23 13:50:51.964597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.180 [2024-07-23 13:50:51.964662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.180 [2024-07-23 13:50:51.964681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.180 [2024-07-23 13:50:51.964745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:51.964763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.181 #15 NEW cov: 11759 ft: 13064 corp: 11/351b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:09:01.181 [2024-07-23 13:50:52.024740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.024774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.024849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.024868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.024933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.024953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.025017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.025036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.025101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.025120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.181 #16 NEW cov: 11759 ft: 13140 corp: 12/386b lim: 35 exec/s: 16 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:09:01.181 [2024-07-23 13:50:52.084818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.084852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.084920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:7eef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.084941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.085008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.085027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.085089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.085112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.085178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.085198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.181 #17 NEW cov: 11759 ft: 13170 corp: 13/421b lim: 35 exec/s: 17 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:09:01.181 [2024-07-23 13:50:52.144930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00efef23 cdw11:efef0003 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.144963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.145037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.145056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.145122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.145141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.145206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.145243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.145309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.145327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.181 #18 NEW cov: 11759 ft: 13186 corp: 14/456b lim: 35 exec/s: 18 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:09:01.181 [2024-07-23 13:50:52.195110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.195143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.195224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.195243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.195308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.195327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.195390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.195408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.181 [2024-07-23 13:50:52.195473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.181 [2024-07-23 13:50:52.195492] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.440 #19 NEW cov: 11759 ft: 13284 corp: 15/491b lim: 35 exec/s: 19 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:09:01.440 [2024-07-23 13:50:52.245234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.245267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.245342] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.245361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.245423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.245445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.245508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.245528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.245593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:ef0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.245612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.440 #20 NEW cov: 11759 ft: 13342 corp: 16/526b lim: 35 exec/s: 20 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:09:01.440 [2024-07-23 13:50:52.305423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0100efef cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.305457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.305530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0fef0000 cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.305548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.305614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.305633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.305699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.305718] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.305782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.305801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.440 #21 NEW cov: 11759 ft: 13367 corp: 17/561b lim: 35 exec/s: 21 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\017"- 00:09:01.440 [2024-07-23 13:50:52.355561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00efef23 cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.355599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.355672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.355691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.355769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:7eefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.355787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.355852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.355871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.355936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.355954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.440 #22 NEW cov: 11759 ft: 13397 corp: 18/596b lim: 35 exec/s: 22 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:09:01.440 [2024-07-23 13:50:52.415729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.415763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.440 [2024-07-23 13:50:52.415838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:7eef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.440 [2024-07-23 13:50:52.415857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.441 [2024-07-23 13:50:52.415924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.441 [2024-07-23 13:50:52.415943] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.441 [2024-07-23 13:50:52.416008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.441 [2024-07-23 13:50:52.416026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.441 [2024-07-23 13:50:52.416090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.441 [2024-07-23 13:50:52.416109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.441 #23 NEW cov: 11759 ft: 13411 corp: 19/631b lim: 35 exec/s: 23 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:09:01.700 [2024-07-23 13:50:52.475877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.475911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.475986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:7eef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.476005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.476073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.476093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.476157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.476177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.476239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efef7eef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.476259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.700 #24 NEW cov: 11759 ft: 13427 corp: 20/666b lim: 35 exec/s: 24 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:09:01.700 [2024-07-23 13:50:52.536040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.536074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.536149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.536168] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.536232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.536251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.536319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.536338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.536405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0f0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.536423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.700 #25 NEW cov: 11759 ft: 13440 corp: 21/701b lim: 35 exec/s: 25 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\017"- 00:09:01.700 [2024-07-23 13:50:52.586018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.586052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.586126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.586145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.586216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.586248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.586315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.586339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.700 #26 NEW cov: 11759 ft: 13510 corp: 22/729b lim: 35 exec/s: 26 rss: 70Mb L: 28/35 MS: 1 EraseBytes- 00:09:01.700 [2024-07-23 13:50:52.636376] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.636410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.636486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 
[2024-07-23 13:50:52.636505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.636585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.636603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.636668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.636686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.636751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efefefef cdw11:0b0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.636769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.700 #27 NEW cov: 11759 ft: 13550 corp: 23/764b lim: 35 exec/s: 27 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:09:01.700 [2024-07-23 13:50:52.676428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.676462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.676536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.676555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.700 [2024-07-23 13:50:52.676620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.700 [2024-07-23 13:50:52.676640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.701 [2024-07-23 13:50:52.676704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:ef010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.701 [2024-07-23 13:50:52.676723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.701 [2024-07-23 13:50:52.676786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.701 [2024-07-23 13:50:52.676806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.701 #28 NEW cov: 11759 ft: 13585 corp: 24/799b lim: 35 exec/s: 28 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\017"- 00:09:01.960 [2024-07-23 13:50:52.726682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.726716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.726791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.726810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.726873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.726893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.726958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.726977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.727042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efeaefef cdw11:ef0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.727060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.960 #29 NEW cov: 11759 ft: 13613 corp: 25/834b lim: 35 exec/s: 29 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:09:01.960 [2024-07-23 13:50:52.786557] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.786591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.786667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefea cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.786686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.786751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.786771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.786836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:ef0b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.786855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.960 #31 NEW cov: 11759 ft: 13618 corp: 26/864b lim: 35 exec/s: 31 rss: 70Mb L: 30/35 MS: 2 CrossOver-CrossOver- 00:09:01.960 [2024-07-23 13:50:52.836958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef 
cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.836992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.837067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.837087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.837152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.837174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.837235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:0f010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.837255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.837321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.837340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.960 #32 NEW cov: 11759 ft: 13631 corp: 27/899b lim: 35 exec/s: 32 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\017"- 00:09:01.960 [2024-07-23 13:50:52.897147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:55efefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.897180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.897260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefeaef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.960 [2024-07-23 13:50:52.897279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.960 [2024-07-23 13:50:52.897345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 [2024-07-23 13:50:52.897363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.961 [2024-07-23 13:50:52.897430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 [2024-07-23 13:50:52.897450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:01.961 [2024-07-23 13:50:52.897517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:efeaefef cdw11:ef0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 
[2024-07-23 13:50:52.897536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:01.961 #33 NEW cov: 11759 ft: 13641 corp: 28/934b lim: 35 exec/s: 33 rss: 70Mb L: 35/35 MS: 1 ChangeByte- 00:09:01.961 [2024-07-23 13:50:52.957108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 [2024-07-23 13:50:52.957142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:01.961 [2024-07-23 13:50:52.957222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:efefefef cdw11:efef0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 [2024-07-23 13:50:52.957241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:01.961 [2024-07-23 13:50:52.957308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 [2024-07-23 13:50:52.957327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:01.961 [2024-07-23 13:50:52.957402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:0b0b0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:01.961 [2024-07-23 13:50:52.957424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:02.220 #34 NEW cov: 11759 ft: 13685 corp: 29/962b lim: 35 exec/s: 34 rss: 70Mb L: 28/35 MS: 1 EraseBytes- 00:09:02.221 [2024-07-23 13:50:53.006726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000a101 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.221 [2024-07-23 13:50:53.006759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:02.221 #36 NEW cov: 11759 ft: 14545 corp: 30/972b lim: 35 exec/s: 18 rss: 70Mb L: 10/35 MS: 2 InsertByte-CMP- DE: "\001\000\000\000\000\000\000\006"- 00:09:02.221 #36 DONE cov: 11759 ft: 14545 corp: 30/972b lim: 35 exec/s: 18 rss: 70Mb 00:09:02.221 ###### Recommended dictionary. ###### 00:09:02.221 "\001\000\000\000\000\000\000\017" # Uses: 3 00:09:02.221 "\001\000\000\000\000\000\000\006" # Uses: 0 00:09:02.221 ###### End of recommended dictionary. 
###### 00:09:02.221 Done 36 runs in 2 second(s) 00:09:02.221 13:50:53 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_4.conf 00:09:02.221 13:50:53 -- ../common.sh@72 -- # (( i++ )) 00:09:02.221 13:50:53 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:02.221 13:50:53 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:02.221 13:50:53 -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:09:02.221 13:50:53 -- nvmf/run.sh@24 -- # local timen=1 00:09:02.221 13:50:53 -- nvmf/run.sh@25 -- # local core=0x1 00:09:02.221 13:50:53 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:09:02.221 13:50:53 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:09:02.221 13:50:53 -- nvmf/run.sh@29 -- # printf %02d 5 00:09:02.221 13:50:53 -- nvmf/run.sh@29 -- # port=4405 00:09:02.221 13:50:53 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:09:02.221 13:50:53 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:09:02.221 13:50:53 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:02.221 13:50:53 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 -r /var/tmp/spdk5.sock 00:09:02.221 [2024-07-23 13:50:53.231002] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:02.221 [2024-07-23 13:50:53.231077] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3910693 ] 00:09:02.480 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.737 [2024-07-23 13:50:53.601929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.737 [2024-07-23 13:50:53.708447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.737 [2024-07-23 13:50:53.708638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.995 [2024-07-23 13:50:53.771274] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.995 [2024-07-23 13:50:53.787508] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:09:02.995 INFO: Running with entropic power schedule (0xFF, 100). 00:09:02.995 INFO: Seed: 3914568694 00:09:02.995 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:02.995 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:02.995 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:09:02.995 INFO: A corpus is not provided, starting from an empty corpus 00:09:02.995 #2 INITED exec/s: 0 rss: 61Mb 00:09:02.995 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:02.995 This may also happen if the target rejected all inputs we tried so far 00:09:02.995 [2024-07-23 13:50:53.865019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:02.995 [2024-07-23 13:50:53.865074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.254 NEW_FUNC[1/671]: 0x488f80 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:09:03.254 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:03.254 #3 NEW cov: 11543 ft: 11544 corp: 2/10b lim: 45 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:03.254 [2024-07-23 13:50:54.215409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.254 [2024-07-23 13:50:54.215457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.254 #8 NEW cov: 11656 ft: 12052 corp: 3/24b lim: 45 exec/s: 0 rss: 69Mb L: 14/14 MS: 5 EraseBytes-ChangeByte-ChangeByte-CopyPart-CrossOver- 00:09:03.512 [2024-07-23 13:50:54.285694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.285732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.512 #9 NEW cov: 11662 ft: 12312 corp: 4/41b lim: 45 exec/s: 0 rss: 69Mb L: 17/17 MS: 1 CopyPart- 00:09:03.512 [2024-07-23 13:50:54.347028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.347068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.512 [2024-07-23 13:50:54.347163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.347184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.512 [2024-07-23 13:50:54.347287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.347309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.512 [2024-07-23 13:50:54.347407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.347429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.512 #10 NEW cov: 11747 ft: 13499 corp: 5/83b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:09:03.512 [2024-07-23 13:50:54.416062] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.416098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.512 #11 NEW cov: 11747 ft: 13532 corp: 6/94b lim: 45 exec/s: 0 rss: 69Mb L: 11/42 MS: 1 EraseBytes- 00:09:03.512 [2024-07-23 13:50:54.486343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.512 [2024-07-23 13:50:54.486379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.512 #12 NEW cov: 11747 ft: 13573 corp: 7/111b lim: 45 exec/s: 0 rss: 69Mb L: 17/42 MS: 1 ChangeByte- 00:09:03.771 [2024-07-23 13:50:54.557866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.557901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.771 [2024-07-23 13:50:54.557999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.558021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.771 [2024-07-23 13:50:54.558115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.558136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.771 [2024-07-23 13:50:54.558233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.558256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.771 #13 NEW cov: 11747 ft: 13630 corp: 8/153b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 ShuffleBytes- 00:09:03.771 [2024-07-23 13:50:54.628047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.628081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.771 [2024-07-23 13:50:54.628174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.628195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:03.771 [2024-07-23 13:50:54.628301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.628325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:03.771 [2024-07-23 13:50:54.628424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.628445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:03.771 #14 NEW cov: 11747 ft: 13684 corp: 9/195b lim: 45 exec/s: 0 rss: 69Mb L: 42/42 MS: 1 ChangeBinInt- 00:09:03.771 [2024-07-23 13:50:54.687237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:78f40a3b cdw11:1ced0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.687272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.771 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:03.771 #15 NEW cov: 11770 ft: 13787 corp: 10/204b lim: 45 exec/s: 0 rss: 69Mb L: 9/42 MS: 1 CMP- DE: ";x\364\034\3556-\000"- 00:09:03.771 [2024-07-23 13:50:54.747422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:78f40a3b cdw11:1ced0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:03.771 [2024-07-23 13:50:54.747457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:03.771 #16 NEW cov: 11770 ft: 13871 corp: 11/213b lim: 45 exec/s: 0 rss: 69Mb L: 9/42 MS: 1 PersAutoDict- DE: ";x\364\034\3556-\000"- 00:09:04.030 [2024-07-23 13:50:54.807597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff11ffff cdw11:00ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.030 [2024-07-23 13:50:54.807636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.030 #17 NEW cov: 11770 ft: 13950 corp: 12/227b lim: 45 exec/s: 17 rss: 69Mb L: 14/42 MS: 1 CMP- DE: "\021\000"- 00:09:04.030 [2024-07-23 13:50:54.867806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.030 [2024-07-23 13:50:54.867841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.030 #18 NEW cov: 11770 ft: 13965 corp: 13/244b lim: 45 exec/s: 18 rss: 69Mb L: 17/42 MS: 1 CopyPart- 00:09:04.030 [2024-07-23 13:50:54.928138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.030 [2024-07-23 13:50:54.928172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.030 #19 NEW cov: 11770 ft: 13978 corp: 14/258b lim: 45 exec/s: 19 rss: 69Mb L: 14/42 MS: 1 CopyPart- 00:09:04.030 [2024-07-23 13:50:54.988391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:bfff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.030 [2024-07-23 13:50:54.988428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.030 #20 NEW cov: 11770 ft: 13986 
corp: 15/267b lim: 45 exec/s: 20 rss: 69Mb L: 9/42 MS: 1 ChangeBit- 00:09:04.030 [2024-07-23 13:50:55.048843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.030 [2024-07-23 13:50:55.048878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.290 #21 NEW cov: 11770 ft: 14070 corp: 16/283b lim: 45 exec/s: 21 rss: 69Mb L: 16/42 MS: 1 InsertRepeatedBytes- 00:09:04.290 [2024-07-23 13:50:55.119047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:44780a3b cdw11:f41c0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.119083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.290 #22 NEW cov: 11770 ft: 14077 corp: 17/293b lim: 45 exec/s: 22 rss: 69Mb L: 10/42 MS: 1 InsertByte- 00:09:04.290 [2024-07-23 13:50:55.190127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.190162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.290 [2024-07-23 13:50:55.190265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3c3c3c3c cdw11:3c3c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.190288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.290 [2024-07-23 13:50:55.190387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:3c3c3c3c cdw11:3c3c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.190410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.290 #23 NEW cov: 11770 ft: 14333 corp: 18/321b lim: 45 exec/s: 23 rss: 69Mb L: 28/42 MS: 1 InsertRepeatedBytes- 00:09:04.290 [2024-07-23 13:50:55.250291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.250327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.290 [2024-07-23 13:50:55.250426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.250450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.290 [2024-07-23 13:50:55.250547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.250569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.290 #24 NEW cov: 11770 ft: 14381 corp: 19/348b lim: 45 exec/s: 24 rss: 69Mb L: 27/42 MS: 1 EraseBytes- 00:09:04.290 [2024-07-23 13:50:55.309796] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:44780a3b cdw11:2ff40000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.290 [2024-07-23 13:50:55.309832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.549 #25 NEW cov: 11770 ft: 14434 corp: 20/359b lim: 45 exec/s: 25 rss: 70Mb L: 11/42 MS: 1 InsertByte- 00:09:04.549 [2024-07-23 13:50:55.381204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.549 [2024-07-23 13:50:55.381243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.549 [2024-07-23 13:50:55.381334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3c3c3c3c cdw11:3cff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.549 [2024-07-23 13:50:55.381356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.549 [2024-07-23 13:50:55.381453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.549 [2024-07-23 13:50:55.381476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.549 [2024-07-23 13:50:55.381570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff3cffff cdw11:3c3c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.549 [2024-07-23 13:50:55.381592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.549 #26 NEW cov: 11770 ft: 14447 corp: 21/403b lim: 45 exec/s: 26 rss: 70Mb L: 44/44 MS: 1 InsertRepeatedBytes- 00:09:04.549 [2024-07-23 13:50:55.451614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.451648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.550 [2024-07-23 13:50:55.451749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3c3c3c3c cdw11:3cff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.451770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.550 [2024-07-23 13:50:55.451871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.451894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.550 [2024-07-23 13:50:55.451995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff3cffff cdw11:3c3c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.452016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.550 #27 NEW cov: 11770 ft: 14458 corp: 
22/444b lim: 45 exec/s: 27 rss: 70Mb L: 41/44 MS: 1 EraseBytes- 00:09:04.550 [2024-07-23 13:50:55.521881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.521914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.550 [2024-07-23 13:50:55.522008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3c3c3c3c cdw11:3c3c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.522030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.550 [2024-07-23 13:50:55.522126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff3cff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.522149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.550 [2024-07-23 13:50:55.522247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff3cffff cdw11:3c3c0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.550 [2024-07-23 13:50:55.522270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:04.550 #28 NEW cov: 11770 ft: 14483 corp: 23/488b lim: 45 exec/s: 28 rss: 70Mb L: 44/44 MS: 1 CopyPart- 00:09:04.809 [2024-07-23 13:50:55.581758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.581794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.809 [2024-07-23 13:50:55.581895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f1f1fff1 cdw11:f1f10007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.581917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.809 [2024-07-23 13:50:55.582019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f1f1f1f1 cdw11:ffbf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.582040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.809 #29 NEW cov: 11770 ft: 14511 corp: 24/516b lim: 45 exec/s: 29 rss: 70Mb L: 28/44 MS: 1 InsertRepeatedBytes- 00:09:04.809 [2024-07-23 13:50:55.651171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.651207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.809 #30 NEW cov: 11770 ft: 14531 corp: 25/532b lim: 45 exec/s: 30 rss: 70Mb L: 16/44 MS: 1 ChangeBit- 00:09:04.809 [2024-07-23 13:50:55.712235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00320000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.712268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.809 [2024-07-23 13:50:55.712366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:f1f1fff1 cdw11:f1f10007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.712387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.809 [2024-07-23 13:50:55.712492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:f1f1f1f1 cdw11:ffbf0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.712514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.809 #31 NEW cov: 11770 ft: 14558 corp: 26/560b lim: 45 exec/s: 31 rss: 70Mb L: 28/44 MS: 1 ChangeByte- 00:09:04.809 [2024-07-23 13:50:55.782509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.782544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:04.809 [2024-07-23 13:50:55.782636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.782659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:04.809 [2024-07-23 13:50:55.782753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00ff8000 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:04.809 [2024-07-23 13:50:55.782774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:04.809 #32 NEW cov: 11770 ft: 14563 corp: 27/590b lim: 45 exec/s: 32 rss: 70Mb L: 30/44 MS: 1 InsertRepeatedBytes- 00:09:05.069 [2024-07-23 13:50:55.853202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.069 [2024-07-23 13:50:55.853243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:05.069 [2024-07-23 13:50:55.853341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3c3c3c3c cdw11:3cff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.069 [2024-07-23 13:50:55.853363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:05.069 [2024-07-23 13:50:55.853458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.069 [2024-07-23 13:50:55.853480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:05.069 [2024-07-23 13:50:55.853583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff3cffff cdw11:3c3c0001 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:05.069 [2024-07-23 13:50:55.853605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:05.069 #33 NEW cov: 11770 ft: 14577 corp: 28/634b lim: 45 exec/s: 16 rss: 70Mb L: 44/44 MS: 1 ChangeBit- 00:09:05.069 #33 DONE cov: 11770 ft: 14577 corp: 28/634b lim: 45 exec/s: 16 rss: 70Mb 00:09:05.069 ###### Recommended dictionary. ###### 00:09:05.069 ";x\364\034\3556-\000" # Uses: 1 00:09:05.069 "\021\000" # Uses: 0 00:09:05.069 ###### End of recommended dictionary. ###### 00:09:05.069 Done 33 runs in 2 second(s) 00:09:05.069 13:50:56 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_5.conf 00:09:05.069 13:50:56 -- ../common.sh@72 -- # (( i++ )) 00:09:05.069 13:50:56 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:05.069 13:50:56 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:05.069 13:50:56 -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:09:05.069 13:50:56 -- nvmf/run.sh@24 -- # local timen=1 00:09:05.069 13:50:56 -- nvmf/run.sh@25 -- # local core=0x1 00:09:05.069 13:50:56 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:05.069 13:50:56 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:09:05.069 13:50:56 -- nvmf/run.sh@29 -- # printf %02d 6 00:09:05.069 13:50:56 -- nvmf/run.sh@29 -- # port=4406 00:09:05.069 13:50:56 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:05.069 13:50:56 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:09:05.069 13:50:56 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:05.069 13:50:56 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 -r /var/tmp/spdk6.sock 00:09:05.069 [2024-07-23 13:50:56.076742] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:05.069 [2024-07-23 13:50:56.076824] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911060 ] 00:09:05.328 EAL: No free 2048 kB hugepages reported on node 1 00:09:05.587 [2024-07-23 13:50:56.429091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.587 [2024-07-23 13:50:56.525321] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:05.587 [2024-07-23 13:50:56.525515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.587 [2024-07-23 13:50:56.588289] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.587 [2024-07-23 13:50:56.604520] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:09:05.846 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:05.846 INFO: Seed: 2436601325 00:09:05.846 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:05.846 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:05.846 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:05.846 INFO: A corpus is not provided, starting from an empty corpus 00:09:05.846 #2 INITED exec/s: 0 rss: 61Mb 00:09:05.846 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:05.846 This may also happen if the target rejected all inputs we tried so far 00:09:05.846 [2024-07-23 13:50:56.660185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005b06 cdw11:00000000 00:09:05.846 [2024-07-23 13:50:56.660234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.104 NEW_FUNC[1/668]: 0x48b790 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:09:06.104 NEW_FUNC[2/668]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:06.104 #4 NEW cov: 11459 ft: 11461 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 ChangeBinInt-InsertByte- 00:09:06.364 [2024-07-23 13:50:57.131780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.364 [2024-07-23 13:50:57.131830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.364 [2024-07-23 13:50:57.131897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.364 [2024-07-23 13:50:57.131917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.364 [2024-07-23 13:50:57.131981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.364 [2024-07-23 13:50:57.131999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.364 [2024-07-23 13:50:57.132065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.364 [2024-07-23 13:50:57.132083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.364 NEW_FUNC[1/1]: 0xebab10 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:09:06.364 #6 NEW cov: 11573 ft: 12224 corp: 3/12b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 2 EraseBytes-InsertRepeatedBytes- 00:09:06.364 [2024-07-23 13:50:57.201452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:09:06.364 [2024-07-23 13:50:57.201488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.364 #7 NEW cov: 11579 ft: 12472 corp: 4/15b lim: 10 exec/s: 0 rss: 69Mb L: 3/9 MS: 1 CrossOver- 00:09:06.365 [2024-07-23 13:50:57.251527] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a5b cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.251561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.365 #8 NEW cov: 11664 ft: 12760 corp: 5/18b lim: 10 exec/s: 0 rss: 69Mb L: 3/9 MS: 1 InsertByte- 00:09:06.365 [2024-07-23 13:50:57.302172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.302206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.365 [2024-07-23 13:50:57.302281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.302301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.365 [2024-07-23 13:50:57.302369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00005b06 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.302389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.365 [2024-07-23 13:50:57.302455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.302474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.365 #9 NEW cov: 11664 ft: 12839 corp: 6/27b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 CrossOver- 00:09:06.365 [2024-07-23 13:50:57.362298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.362332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.365 [2024-07-23 13:50:57.362396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000507 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.362417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.365 [2024-07-23 13:50:57.362484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.362503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.365 [2024-07-23 13:50:57.362567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.365 [2024-07-23 13:50:57.362586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.624 #10 NEW cov: 11664 ft: 12970 corp: 7/36b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:06.624 [2024-07-23 13:50:57.412044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005b0b cdw11:00000000 00:09:06.624 [2024-07-23 13:50:57.412080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.624 #11 NEW cov: 11664 ft: 13056 corp: 8/39b lim: 10 exec/s: 0 rss: 69Mb L: 3/9 MS: 1 ChangeBit- 00:09:06.624 [2024-07-23 13:50:57.472631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:09:06.624 [2024-07-23 13:50:57.472668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.624 [2024-07-23 13:50:57.472736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:06.624 [2024-07-23 13:50:57.472756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.624 [2024-07-23 13:50:57.472822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:06.624 [2024-07-23 13:50:57.472841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.624 [2024-07-23 13:50:57.472906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff5b cdw11:00000000 00:09:06.624 [2024-07-23 13:50:57.472925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.624 #12 NEW cov: 11664 ft: 13119 corp: 9/48b lim: 10 exec/s: 0 rss: 69Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:06.624 [2024-07-23 13:50:57.532403] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000060b cdw11:00000000 00:09:06.624 [2024-07-23 13:50:57.532437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.624 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:06.624 #13 NEW cov: 11687 ft: 13194 corp: 10/51b lim: 10 exec/s: 0 rss: 69Mb L: 3/9 MS: 1 CrossOver- 00:09:06.625 [2024-07-23 13:50:57.592675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005b30 cdw11:00000000 00:09:06.625 [2024-07-23 13:50:57.592709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.625 [2024-07-23 13:50:57.592777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a06 cdw11:00000000 00:09:06.625 [2024-07-23 13:50:57.592797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.625 #14 NEW cov: 11687 ft: 13378 corp: 11/55b lim: 10 exec/s: 0 rss: 69Mb L: 4/9 MS: 1 InsertByte- 00:09:06.625 [2024-07-23 13:50:57.642720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a64 cdw11:00000000 00:09:06.625 [2024-07-23 13:50:57.642753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.884 #15 NEW cov: 11687 ft: 13397 corp: 12/57b lim: 10 exec/s: 15 rss: 69Mb L: 2/9 MS: 1 InsertByte- 00:09:06.884 [2024-07-23 13:50:57.692995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005a30 cdw11:00000000 
00:09:06.884 [2024-07-23 13:50:57.693030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.693099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a06 cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.693129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.884 #16 NEW cov: 11687 ft: 13447 corp: 13/61b lim: 10 exec/s: 16 rss: 69Mb L: 4/9 MS: 1 ChangeBit- 00:09:06.884 [2024-07-23 13:50:57.753509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.753543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.753608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.753629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.753698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.753718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.753784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000009 cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.753804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.884 #17 NEW cov: 11687 ft: 13496 corp: 14/70b lim: 10 exec/s: 17 rss: 69Mb L: 9/9 MS: 1 ChangeBinInt- 00:09:06.884 [2024-07-23 13:50:57.803712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.803746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.803811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.803833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.803899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.803919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.803984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff5b cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.804003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.804070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000606 cdw11:00000000 
00:09:06.884 [2024-07-23 13:50:57.804089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:06.884 #18 NEW cov: 11687 ft: 13579 corp: 15/80b lim: 10 exec/s: 18 rss: 69Mb L: 10/10 MS: 1 CrossOver- 00:09:06.884 [2024-07-23 13:50:57.863898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.863932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.863997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.864017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.864083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.864103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.864166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00005b5b cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.864186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:06.884 [2024-07-23 13:50:57.864257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000606 cdw11:00000000 00:09:06.884 [2024-07-23 13:50:57.864277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:06.884 #19 NEW cov: 11687 ft: 13636 corp: 16/90b lim: 10 exec/s: 19 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:09:07.144 [2024-07-23 13:50:57.923771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005dff cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.923805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:57.923876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.923895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:57.923966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.923986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.144 #21 NEW cov: 11687 ft: 13773 corp: 17/97b lim: 10 exec/s: 21 rss: 70Mb L: 7/10 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:07.144 [2024-07-23 13:50:57.974205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.974245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:57.974312] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.974331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:57.974400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000505 cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.974421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:57.974486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00007e05 cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.974507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:57.974571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000055b cdw11:00000000 00:09:07.144 [2024-07-23 13:50:57.974590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:07.144 #22 NEW cov: 11687 ft: 13784 corp: 18/107b lim: 10 exec/s: 22 rss: 70Mb L: 10/10 MS: 1 InsertByte- 00:09:07.144 [2024-07-23 13:50:58.023776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000e5b cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.023810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.144 #23 NEW cov: 11687 ft: 13829 corp: 19/110b lim: 10 exec/s: 23 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:09:07.144 [2024-07-23 13:50:58.074554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.074588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:58.074654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002dff cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.074673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:58.074738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.074757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:58.074818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff5b cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.074842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.144 [2024-07-23 13:50:58.074907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:00000606 cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.074926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:07.144 #24 NEW cov: 11687 ft: 13845 
corp: 20/120b lim: 10 exec/s: 24 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:09:07.144 [2024-07-23 13:50:58.124009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007c2a cdw11:00000000 00:09:07.144 [2024-07-23 13:50:58.124042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.144 #26 NEW cov: 11687 ft: 13889 corp: 21/122b lim: 10 exec/s: 26 rss: 70Mb L: 2/10 MS: 2 CrossOver-InsertByte- 00:09:07.404 [2024-07-23 13:50:58.174485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000505 cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.174518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.174587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.174606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.174674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000505 cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.174692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.404 #27 NEW cov: 11687 ft: 13897 corp: 22/129b lim: 10 exec/s: 27 rss: 70Mb L: 7/10 MS: 1 CrossOver- 00:09:07.404 [2024-07-23 13:50:58.234418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002a5b cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.234452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.404 #28 NEW cov: 11687 ft: 13906 corp: 23/132b lim: 10 exec/s: 28 rss: 70Mb L: 3/10 MS: 1 ChangeByte- 00:09:07.404 [2024-07-23 13:50:58.285173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.285206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.285283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.285303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.285369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.285388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.285453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ff5b cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.285473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.285540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 
cdw10:000006ff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.285559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:07.404 #29 NEW cov: 11687 ft: 13928 corp: 24/142b lim: 10 exec/s: 29 rss: 70Mb L: 10/10 MS: 1 CopyPart- 00:09:07.404 [2024-07-23 13:50:58.335199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005dff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.335238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.335309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.335329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.335395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000031ff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.335414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.335483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.335501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:07.404 #30 NEW cov: 11687 ft: 13935 corp: 25/150b lim: 10 exec/s: 30 rss: 70Mb L: 8/10 MS: 1 InsertByte- 00:09:07.404 [2024-07-23 13:50:58.395169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005dfe cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.395202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.395277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.395298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.404 [2024-07-23 13:50:58.395363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:07.404 [2024-07-23 13:50:58.395383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:07.664 #31 NEW cov: 11687 ft: 13945 corp: 26/157b lim: 10 exec/s: 31 rss: 70Mb L: 7/10 MS: 1 ChangeBit- 00:09:07.664 [2024-07-23 13:50:58.445161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002aff cdw11:00000000 00:09:07.664 [2024-07-23 13:50:58.445193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:07.664 [2024-07-23 13:50:58.445265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002a5b cdw11:00000000 00:09:07.664 [2024-07-23 13:50:58.445285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:07.664 #32 NEW cov: 11687 
ft: 13959 corp: 27/162b lim: 10 exec/s: 32 rss: 70Mb L: 5/10 MS: 1 CrossOver-
00:09:07.664 [2024-07-23 13:50:58.505532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000060b cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.505566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:07.664 [2024-07-23 13:50:58.505634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000060b cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.505663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:07.664 [2024-07-23 13:50:58.505731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000606 cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.505750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:07.664 #33 NEW cov: 11687 ft: 13976 corp: 28/168b lim: 10 exec/s: 33 rss: 70Mb L: 6/10 MS: 1 CopyPart-
00:09:07.664 [2024-07-23 13:50:58.565395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.565428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:07.664 #34 NEW cov: 11687 ft: 14010 corp: 29/171b lim: 10 exec/s: 34 rss: 70Mb L: 3/10 MS: 1 EraseBytes-
00:09:07.664 [2024-07-23 13:50:58.615810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008505 cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.615843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:09:07.664 [2024-07-23 13:50:58.615912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000505 cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.615933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:09:07.664 [2024-07-23 13:50:58.616004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000505 cdw11:00000000
00:09:07.664 [2024-07-23 13:50:58.616023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:09:07.664 #35 NEW cov: 11687 ft: 14043 corp: 30/178b lim: 10 exec/s: 17 rss: 70Mb L: 7/10 MS: 1 ChangeBit-
00:09:07.664 #35 DONE cov: 11687 ft: 14043 corp: 30/178b lim: 10 exec/s: 17 rss: 70Mb
00:09:07.664 Done 35 runs in 2 second(s)
00:09:07.924 13:50:58 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_6.conf
00:09:07.924 13:50:58 -- ../common.sh@72 -- # (( i++ ))
00:09:07.924 13:50:58 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:09:07.924 13:50:58 -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
00:09:07.924 13:50:58 -- nvmf/run.sh@23 -- # local fuzzer_type=7
00:09:07.924 13:50:58 -- nvmf/run.sh@24 -- # local timen=1
00:09:07.924 13:50:58 -- nvmf/run.sh@25 -- # local core=0x1
00:09:07.924 13:50:58 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:09:07.924 13:50:58 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
00:09:07.924 13:50:58 -- nvmf/run.sh@29 -- # printf %02d 7
00:09:07.924 13:50:58 -- nvmf/run.sh@29 -- # port=4407
00:09:07.924 13:50:58 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:09:07.924 13:50:58 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
00:09:07.924 13:50:58 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:09:07.924 13:50:58 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 -r /var/tmp/spdk7.sock
00:09:07.924 [2024-07-23 13:50:58.850811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:09:07.924 [2024-07-23 13:50:58.850899] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911424 ]
00:09:07.924 EAL: No free 2048 kB hugepages reported on node 1
00:09:08.493 [2024-07-23 13:50:59.223142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.493 [2024-07-23 13:50:59.321827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:08.493 [2024-07-23 13:50:59.322020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.493 [2024-07-23 13:50:59.384616] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:08.493 [2024-07-23 13:50:59.400844] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 ***
00:09:08.493 INFO: Running with entropic power schedule (0xFF, 100).
00:09:08.493 INFO: Seed: 937645067
00:09:08.493 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9),
00:09:08.493 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480),
00:09:08.493 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:09:08.493 INFO: A corpus is not provided, starting from an empty corpus
00:09:08.493 #2 INITED exec/s: 0 rss: 61Mb
00:09:08.493 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
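The block above is the hand-off into fuzzer run 7: nvmf/run.sh rewrites the target's JSON config with sed (trsvcid 4420 becomes 4407) to produce the /tmp/fuzz_json_7.conf passed via -c, creates an empty corpus directory, and launches llvm_nvme_fuzz, a libFuzzer-style binary whose TestOneInput (resolved in the NEW_FUNC lines just below) replays each mutated input against the NVMe/TCP target as admin commands. As a rough sketch of such an entry point, illustrative only and not SPDK's actual llvm_nvme_fuzz.c (the command struct and its wiring here are assumptions):

/* Minimal libFuzzer harness sketch; build with: clang -fsanitize=fuzzer harness.c */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the admin command fields a real harness submits. */
struct fuzz_cmd {
	uint32_t cdw10;
	uint32_t cdw11;
};

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
	struct fuzz_cmd cmd = { 0 };

	if (size < sizeof(cmd)) {
		return 0; /* too small to shape into a command */
	}
	memcpy(&cmd, data, sizeof(cmd));
	/* A real harness would submit cmd to the target here (e.g. as the
	 * DELETE IO CQ cdw10/cdw11 values seen in this log) and print the
	 * completion, which is what produces the NOTICE lines. */
	return 0;
}

libFuzzer supplies main(), mutates inputs, and emits a "#N NEW cov: ..." status line whenever an input reaches new coverage, which is what the numbered lines throughout this log are.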
00:09:08.493 This may also happen if the target rejected all inputs we tried so far 00:09:08.493 [2024-07-23 13:50:59.466836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:08.493 [2024-07-23 13:50:59.466877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:08.493 [2024-07-23 13:50:59.466948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:08.493 [2024-07-23 13:50:59.466968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:08.493 [2024-07-23 13:50:59.467029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea36 cdw11:00000000 00:09:08.493 [2024-07-23 13:50:59.467048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:08.493 [2024-07-23 13:50:59.467111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:08.493 [2024-07-23 13:50:59.467130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.061 NEW_FUNC[1/669]: 0x48c180 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:09:09.061 NEW_FUNC[2/669]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:09.062 #3 NEW cov: 11450 ft: 11460 corp: 2/10b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CMP- DE: "\027\214_\300\3526-\000"- 00:09:09.062 [2024-07-23 13:50:59.950280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:09.062 [2024-07-23 13:50:59.950332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:50:59.950431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000dfc0 cdw11:00000000 00:09:09.062 [2024-07-23 13:50:59.950453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:50:59.950548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea36 cdw11:00000000 00:09:09.062 [2024-07-23 13:50:59.950568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:50:59.950663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:09.062 [2024-07-23 13:50:59.950682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.062 #4 NEW cov: 11573 ft: 12042 corp: 3/19b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeBit- 00:09:09.062 [2024-07-23 13:51:00.020366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.020404] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:51:00.020502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.020526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:51:00.020621] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003600 cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.020644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:51:00.020739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ea2d cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.020761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.062 #5 NEW cov: 11579 ft: 12279 corp: 4/28b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:09.062 [2024-07-23 13:51:00.080560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.080599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:51:00.080699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.080722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:51:00.080820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea33 cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.080840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.062 [2024-07-23 13:51:00.080925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:09.062 [2024-07-23 13:51:00.080947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.321 #6 NEW cov: 11664 ft: 12523 corp: 5/37b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeASCIIInt- 00:09:09.321 [2024-07-23 13:51:00.140871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.140909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.140994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.141014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.141111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003618 cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.141132] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.141229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.141251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.321 #7 NEW cov: 11664 ft: 12605 corp: 6/46b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CMP- DE: "\030\000\000\000"- 00:09:09.321 [2024-07-23 13:51:00.211493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000170a cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.211529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.211616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008cdf cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.211638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.211734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.211755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.211845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000362d cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.211867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.211961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.211984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:09.321 #8 NEW cov: 11664 ft: 12721 corp: 7/56b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CrossOver- 00:09:09.321 [2024-07-23 13:51:00.281462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000175f cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.281497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.281593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c08c cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.281615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.281702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea33 cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.281724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.321 [2024-07-23 13:51:00.281811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:09.321 [2024-07-23 13:51:00.281833] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.321 #9 NEW cov: 11664 ft: 12782 corp: 8/65b lim: 10 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:09:09.581 [2024-07-23 13:51:00.351972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000170a cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.352009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.352102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008c0a cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.352125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.352217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008cdf cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.352239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.352336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000c02d cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.352358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.352459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.352480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:09.581 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:09.581 #10 NEW cov: 11687 ft: 12833 corp: 9/75b lim: 10 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:09:09.581 [2024-07-23 13:51:00.422046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.422085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.422179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.422202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.422303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003609 cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.422323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.422419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.422439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.581 #11 NEW cov: 11687 ft: 12862 corp: 10/84b lim: 10 exec/s: 11 rss: 69Mb L: 9/10 
MS: 1 ChangeBinInt- 00:09:09.581 [2024-07-23 13:51:00.482463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001736 cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.482500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.482599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.482621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.482714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c05f cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.482736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.482834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008c09 cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.482855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.581 #12 NEW cov: 11687 ft: 12893 corp: 11/93b lim: 10 exec/s: 12 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:09:09.581 [2024-07-23 13:51:00.552876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000170a cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.552914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.553012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008cdf cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.553033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.553130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.553151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.553253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000322d cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.553275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.581 [2024-07-23 13:51:00.553371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:09.581 [2024-07-23 13:51:00.553393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:09.581 #13 NEW cov: 11687 ft: 12914 corp: 12/103b lim: 10 exec/s: 13 rss: 69Mb L: 10/10 MS: 1 ChangeASCIIInt- 00:09:09.841 [2024-07-23 13:51:00.612723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:09.841 [2024-07-23 13:51:00.612761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:09:09.841 [2024-07-23 13:51:00.612853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:09.841 [2024-07-23 13:51:00.612876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.841 [2024-07-23 13:51:00.612971] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea36 cdw11:00000000 00:09:09.841 [2024-07-23 13:51:00.612995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.841 [2024-07-23 13:51:00.613090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:09.841 [2024-07-23 13:51:00.613114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.841 #14 NEW cov: 11687 ft: 12992 corp: 13/112b lim: 10 exec/s: 14 rss: 69Mb L: 9/10 MS: 1 PersAutoDict- DE: "\027\214_\300\3526-\000"- 00:09:09.842 [2024-07-23 13:51:00.673043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001717 cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.673079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.673167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008c5f cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.673188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.673282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.673303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.673396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000362d cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.673417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.842 #15 NEW cov: 11687 ft: 13026 corp: 14/121b lim: 10 exec/s: 15 rss: 69Mb L: 9/10 MS: 1 PersAutoDict- DE: "\027\214_\300\3526-\000"- 00:09:09.842 [2024-07-23 13:51:00.733264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001717 cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.733300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.733393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008c5f cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.733415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.733510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.733533] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.733627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000362d cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.733648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.842 #16 NEW cov: 11687 ft: 13039 corp: 15/130b lim: 10 exec/s: 16 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:09:09.842 [2024-07-23 13:51:00.803744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000175f cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.803783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.803868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c08c cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.803889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.803974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea8c cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.803997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.804086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000332d cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.804109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:09.842 [2024-07-23 13:51:00.804202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:09.842 [2024-07-23 13:51:00.804229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:09.842 #17 NEW cov: 11687 ft: 13051 corp: 16/140b lim: 10 exec/s: 17 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:09:10.102 [2024-07-23 13:51:00.873567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.873606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:00.873705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ea33 cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.873726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:00.873826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.873848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.102 #18 NEW cov: 11687 ft: 13283 corp: 17/147b lim: 10 exec/s: 18 rss: 69Mb L: 7/10 MS: 1 EraseBytes- 00:09:10.102 [2024-07-23 13:51:00.933963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ 
(00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.934001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:00.934088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005f44 cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.934109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:00.934199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003618 cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.934226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:00.934322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:09:10.102 [2024-07-23 13:51:00.934344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.102 #19 NEW cov: 11687 ft: 13334 corp: 18/156b lim: 10 exec/s: 19 rss: 69Mb L: 9/10 MS: 1 ChangeBinInt- 00:09:10.102 [2024-07-23 13:51:01.004471] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000170a cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.004508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.004598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008cdf cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.004621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.004709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.004728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.004817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000352d cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.004837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.004932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.004952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:10.102 #20 NEW cov: 11687 ft: 13365 corp: 19/166b lim: 10 exec/s: 20 rss: 69Mb L: 10/10 MS: 1 ChangeASCIIInt- 00:09:10.102 [2024-07-23 13:51:01.064711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000175f cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.064748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.064839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c08c cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.064861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.064958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ea8c cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.064980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.065066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000332d cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.065089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.102 [2024-07-23 13:51:01.065185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000002a cdw11:00000000 00:09:10.102 [2024-07-23 13:51:01.065207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:10.102 #21 NEW cov: 11687 ft: 13428 corp: 20/176b lim: 10 exec/s: 21 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:09:10.361 [2024-07-23 13:51:01.135042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001717 cdw11:00000000 00:09:10.361 [2024-07-23 13:51:01.135080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.361 [2024-07-23 13:51:01.135167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ea8c cdw11:00000000 00:09:10.361 [2024-07-23 13:51:01.135189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.361 [2024-07-23 13:51:01.135288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:10.361 [2024-07-23 13:51:01.135322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.361 [2024-07-23 13:51:01.135416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ea36 cdw11:00000000 00:09:10.361 [2024-07-23 13:51:01.135439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.135544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.135565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:10.362 #22 NEW cov: 11687 ft: 13451 corp: 21/186b lim: 10 exec/s: 22 rss: 70Mb L: 10/10 MS: 1 CrossOver- 00:09:10.362 [2024-07-23 13:51:01.205044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.205079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.205171] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ea33 cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.205193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.205296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.205319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.205409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.205431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.362 #23 NEW cov: 11687 ft: 13489 corp: 22/195b lim: 10 exec/s: 23 rss: 70Mb L: 9/10 MS: 1 CrossOver- 00:09:10.362 [2024-07-23 13:51:01.275096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005f33 cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.275132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.275227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.275249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.275335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00002d00 cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.275356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.362 #24 NEW cov: 11687 ft: 13502 corp: 23/202b lim: 10 exec/s: 24 rss: 70Mb L: 7/10 MS: 1 ShuffleBytes- 00:09:10.362 [2024-07-23 13:51:01.345893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000170a cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.345929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.346024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008cdf cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.346047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.346144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c0ea cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.346167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.346262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000322d cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.346285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.362 [2024-07-23 13:51:01.346371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO 
SQ (00) qid:0 cid:8 nsid:0 cdw10:0000001a cdw11:00000000 00:09:10.362 [2024-07-23 13:51:01.346393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:10.621 #25 NEW cov: 11687 ft: 13530 corp: 24/212b lim: 10 exec/s: 25 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:09:10.621 [2024-07-23 13:51:01.415944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000178c cdw11:00000000 00:09:10.621 [2024-07-23 13:51:01.415979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.621 [2024-07-23 13:51:01.416067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00005fc0 cdw11:00000000 00:09:10.621 [2024-07-23 13:51:01.416088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.621 [2024-07-23 13:51:01.416140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00003700 cdw11:00000000 00:09:10.621 [2024-07-23 13:51:01.416161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.621 [2024-07-23 13:51:01.416260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ea2d cdw11:00000000 00:09:10.621 [2024-07-23 13:51:01.416282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.621 #26 NEW cov: 11687 ft: 13533 corp: 25/221b lim: 10 exec/s: 26 rss: 70Mb L: 9/10 MS: 1 ChangeASCIIInt- 00:09:10.621 [2024-07-23 13:51:01.476126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00001700 cdw11:00000000 00:09:10.621 [2024-07-23 13:51:01.476161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:10.621 [2024-07-23 13:51:01.476233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000365f cdw11:00000000 00:09:10.621 [2024-07-23 13:51:01.476255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:10.622 [2024-07-23 13:51:01.476349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000188c cdw11:00000000 00:09:10.622 [2024-07-23 13:51:01.476371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:10.622 [2024-07-23 13:51:01.476465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000044 cdw11:00000000 00:09:10.622 [2024-07-23 13:51:01.476487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:10.622 #27 NEW cov: 11687 ft: 13539 corp: 26/230b lim: 10 exec/s: 13 rss: 70Mb L: 9/10 MS: 1 ShuffleBytes- 00:09:10.622 #27 DONE cov: 11687 ft: 13539 corp: 26/230b lim: 10 exec/s: 13 rss: 70Mb 00:09:10.622 ###### Recommended dictionary. ###### 00:09:10.622 "\027\214_\300\3526-\000" # Uses: 2 00:09:10.622 "\030\000\000\000" # Uses: 0 00:09:10.622 ###### End of recommended dictionary. 
###### 00:09:10.622 Done 27 runs in 2 second(s) 00:09:10.881 13:51:01 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_7.conf 00:09:10.882 13:51:01 -- ../common.sh@72 -- # (( i++ )) 00:09:10.882 13:51:01 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:10.882 13:51:01 -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:09:10.882 13:51:01 -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:09:10.882 13:51:01 -- nvmf/run.sh@24 -- # local timen=1 00:09:10.882 13:51:01 -- nvmf/run.sh@25 -- # local core=0x1 00:09:10.882 13:51:01 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:10.882 13:51:01 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:09:10.882 13:51:01 -- nvmf/run.sh@29 -- # printf %02d 8 00:09:10.882 13:51:01 -- nvmf/run.sh@29 -- # port=4408 00:09:10.882 13:51:01 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:10.882 13:51:01 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:09:10.882 13:51:01 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:10.882 13:51:01 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 -r /var/tmp/spdk8.sock 00:09:10.882 [2024-07-23 13:51:01.699694] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:10.882 [2024-07-23 13:51:01.699790] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911892 ] 00:09:10.882 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.141 [2024-07-23 13:51:02.065297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.141 [2024-07-23 13:51:02.160166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:11.141 [2024-07-23 13:51:02.160367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.400 [2024-07-23 13:51:02.222892] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.400 [2024-07-23 13:51:02.239128] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:09:11.400 INFO: Running with entropic power schedule (0xFF, 100). 
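The "Recommended dictionary" block above is libFuzzer output: byte sequences that proved productive during run 7, printed as C-style octal escape strings. "\027\214_\300\3526-\000" decodes to the bytes 17 8c 5f c0 ea 36 2d 00, the same pattern visible in run 7's command prints (cdw10:0000178c, 00005fc0, 0000ea36, 00002d00). A quick, illustrative way to check the decoding (not part of the test suite):

#include <stdio.h>

int main(void)
{
	/* The dictionary entry exactly as libFuzzer printed it. */
	const unsigned char pat[] = "\027\214_\300\3526-\000";
	size_t i;

	for (i = 0; i + 1 < sizeof(pat); i++) { /* skip the implicit trailing NUL */
		printf("%02x ", pat[i]);
	}
	printf("\n"); /* prints: 17 8c 5f c0 ea 36 2d 00 */
	return 0;
}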
00:09:11.400 INFO: Seed: 3777630868 00:09:11.400 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:11.400 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:11.400 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:11.400 INFO: A corpus is not provided, starting from an empty corpus 00:09:11.400 [2024-07-23 13:51:02.316370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.400 [2024-07-23 13:51:02.316420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.400 #2 INITED cov: 11488 ft: 11462 corp: 1/1b exec/s: 0 rss: 67Mb 00:09:11.400 [2024-07-23 13:51:02.365952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.400 [2024-07-23 13:51:02.365984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.400 #3 NEW cov: 11601 ft: 11991 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeByte- 00:09:11.659 [2024-07-23 13:51:02.426022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.659 [2024-07-23 13:51:02.426058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.659 #4 NEW cov: 11607 ft: 12343 corp: 3/3b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 ChangeByte- 00:09:11.659 [2024-07-23 13:51:02.486475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.659 [2024-07-23 13:51:02.486504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.659 [2024-07-23 13:51:02.486625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.659 [2024-07-23 13:51:02.486645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.660 #5 NEW cov: 11692 ft: 13321 corp: 4/5b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CopyPart- 00:09:11.660 [2024-07-23 13:51:02.536949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.660 [2024-07-23 13:51:02.536980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.660 [2024-07-23 13:51:02.537105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.660 [2024-07-23 13:51:02.537125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.660 [2024-07-23 13:51:02.537244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.660 [2024-07-23 13:51:02.537265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.660 #6 NEW cov: 11692 ft: 13592 corp: 5/8b lim: 5 exec/s: 0 rss: 68Mb L: 3/3 MS: 1 InsertByte- 00:09:11.660 [2024-07-23 13:51:02.596879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.660 [2024-07-23 13:51:02.596908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.660 [2024-07-23 13:51:02.597009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.660 [2024-07-23 13:51:02.597026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.660 #7 NEW cov: 11692 ft: 13673 corp: 6/10b lim: 5 exec/s: 0 rss: 68Mb L: 2/3 MS: 1 CrossOver- 00:09:11.660 [2024-07-23 13:51:02.656806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.660 [2024-07-23 13:51:02.656834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.660 #8 NEW cov: 11692 ft: 13816 corp: 7/11b lim: 5 exec/s: 0 rss: 68Mb L: 1/3 MS: 1 ChangeBit- 00:09:11.919 [2024-07-23 13:51:02.707891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.707923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.919 [2024-07-23 13:51:02.708041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.708060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.919 [2024-07-23 13:51:02.708180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.708198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:11.919 [2024-07-23 13:51:02.708315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.708340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:11.919 #9 NEW cov: 11692 ft: 14096 corp: 8/15b lim: 5 exec/s: 0 rss: 68Mb L: 4/4 MS: 1 InsertByte- 00:09:11.919 [2024-07-23 13:51:02.767370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:11.919 [2024-07-23 13:51:02.767399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.919 [2024-07-23 13:51:02.767521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.767541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:11.919 #10 NEW cov: 11692 ft: 14239 corp: 9/17b lim: 5 exec/s: 0 rss: 68Mb L: 2/4 MS: 1 CrossOver- 00:09:11.919 [2024-07-23 13:51:02.817335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.817366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.919 #11 NEW cov: 11692 ft: 14262 corp: 10/18b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 MS: 1 EraseBytes- 00:09:11.919 [2024-07-23 13:51:02.867475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.867504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.919 #12 NEW cov: 11692 ft: 14304 corp: 11/19b lim: 5 exec/s: 0 rss: 68Mb L: 1/4 MS: 1 EraseBytes- 00:09:11.919 [2024-07-23 13:51:02.917995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.918025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:11.919 [2024-07-23 13:51:02.918156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:11.919 [2024-07-23 13:51:02.918179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.179 #13 NEW cov: 11692 ft: 14390 corp: 12/21b lim: 5 exec/s: 0 rss: 69Mb L: 2/4 MS: 1 InsertByte- 00:09:12.179 [2024-07-23 13:51:02.979154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:02.979185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:02.979301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:02.979322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:02.979461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:02.979480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:02.979594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:02.979620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:02.979744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:02.979763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:12.179 #14 NEW cov: 11692 ft: 14489 corp: 13/26b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:12.179 [2024-07-23 13:51:03.028303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.028332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:03.028453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.028474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.179 #15 NEW cov: 11692 ft: 14522 corp: 14/28b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:09:12.179 [2024-07-23 13:51:03.078842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.078871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:03.078999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.079018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:03.079130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.079151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.179 #16 NEW cov: 11692 ft: 14548 corp: 15/31b lim: 5 exec/s: 0 rss: 69Mb L: 3/5 MS: 1 CrossOver- 00:09:12.179 [2024-07-23 13:51:03.128650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.128680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.179 [2024-07-23 13:51:03.128795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.179 [2024-07-23 13:51:03.128816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.179 #17 NEW cov: 11692 ft: 14578 corp: 16/33b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 ShuffleBytes- 00:09:12.179 [2024-07-23 13:51:03.178866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.180 [2024-07-23 13:51:03.178897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.180 [2024-07-23 13:51:03.179021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.180 [2024-07-23 13:51:03.179044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.701 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:12.701 #18 NEW cov: 11715 ft: 14624 corp: 17/35b lim: 5 exec/s: 18 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:09:12.701 [2024-07-23 13:51:03.651205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.701 [2024-07-23 13:51:03.651258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.701 [2024-07-23 13:51:03.651381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.701 [2024-07-23 13:51:03.651403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.701 #19 NEW cov: 11715 ft: 14635 corp: 18/37b lim: 5 exec/s: 19 rss: 70Mb L: 2/5 MS: 1 CopyPart- 00:09:12.701 [2024-07-23 13:51:03.710816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.701 [2024-07-23 13:51:03.710844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.960 #20 NEW cov: 11715 ft: 14685 corp: 19/38b lim: 5 exec/s: 20 rss: 70Mb L: 1/5 MS: 1 CopyPart- 00:09:12.960 [2024-07-23 13:51:03.761464] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.761494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.960 [2024-07-23 13:51:03.761584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.761601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.960 #21 NEW cov: 11715 ft: 14712 corp: 20/40b lim: 5 exec/s: 21 rss: 70Mb L: 2/5 MS: 1 CrossOver- 
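A reading aid for the status lines that make up the bulk of this output (standard libFuzzer reporting, not SPDK-specific): cov counts covered code edges, ft counts finer-grained coverage features, corp gives the number of corpus entries and their total size, lim is the current input-length cap, L: shows the new input's length over the largest in the corpus so far, and MS: lists the mutation chain that produced the input (ChangeBit, CrossOver, CopyPart, InsertByte, ShuffleBytes, and so on). To pull just the per-run end results out of a saved copy of this console output, a small filter is enough — a hypothetical helper, with console.log as a placeholder filename, not a script from the SPDK tree:

    # Summarize each run's final coverage and corpus size; sort -u collapses
    # the duplicate DONE lines that libFuzzer prints at the end of a run.
    grep -o 'DONE cov: [0-9]* ft: [0-9]* corp: [0-9]*/[0-9]*b' console.log |
        sort -u |
        awk '{print "cov=" $3, "ft=" $5, "corp=" $7}'

For the two runs in this section that would yield cov=11715-class results such as "cov=11715 ft=14980 corp=30/61b", matching the #31 DONE summary below.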
00:09:12.960 [2024-07-23 13:51:03.811332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.811359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.960 #22 NEW cov: 11715 ft: 14720 corp: 21/41b lim: 5 exec/s: 22 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:09:12.960 [2024-07-23 13:51:03.872283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.872311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.960 [2024-07-23 13:51:03.872415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.872432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:12.960 [2024-07-23 13:51:03.872550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.872567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:12.960 #23 NEW cov: 11715 ft: 14733 corp: 22/44b lim: 5 exec/s: 23 rss: 70Mb L: 3/5 MS: 1 CopyPart- 00:09:12.960 [2024-07-23 13:51:03.921755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.921784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:12.960 #24 NEW cov: 11715 ft: 14757 corp: 23/45b lim: 5 exec/s: 24 rss: 70Mb L: 1/5 MS: 1 ChangeBit- 00:09:12.960 [2024-07-23 13:51:03.971834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:12.960 [2024-07-23 13:51:03.971863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.219 #25 NEW cov: 11715 ft: 14777 corp: 24/46b lim: 5 exec/s: 25 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:09:13.219 [2024-07-23 13:51:04.022128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.219 [2024-07-23 13:51:04.022155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.219 #26 NEW cov: 11715 ft: 14781 corp: 25/47b lim: 5 exec/s: 26 rss: 70Mb L: 1/5 MS: 1 EraseBytes- 00:09:13.219 [2024-07-23 13:51:04.073056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.219 [2024-07-23 13:51:04.073084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:09:13.219 [2024-07-23 13:51:04.073184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.219 [2024-07-23 13:51:04.073202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.219 #27 NEW cov: 11715 ft: 14794 corp: 26/49b lim: 5 exec/s: 27 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:09:13.219 [2024-07-23 13:51:04.134531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.219 [2024-07-23 13:51:04.134561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.220 [2024-07-23 13:51:04.134656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.134674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.220 [2024-07-23 13:51:04.134768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.134788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.220 [2024-07-23 13:51:04.134884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.134901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:13.220 [2024-07-23 13:51:04.134997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.135016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:13.220 #28 NEW cov: 11715 ft: 14830 corp: 27/54b lim: 5 exec/s: 28 rss: 70Mb L: 5/5 MS: 1 CopyPart- 00:09:13.220 [2024-07-23 13:51:04.194010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.194038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.220 [2024-07-23 13:51:04.194146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.194163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.220 [2024-07-23 13:51:04.194262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.220 [2024-07-23 13:51:04.194280] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.220 #29 NEW cov: 11715 ft: 14868 corp: 28/57b lim: 5 exec/s: 29 rss: 70Mb L: 3/5 MS: 1 InsertByte- 00:09:13.479 [2024-07-23 13:51:04.244287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.479 [2024-07-23 13:51:04.244315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.479 [2024-07-23 13:51:04.244424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.479 [2024-07-23 13:51:04.244441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:13.479 [2024-07-23 13:51:04.244546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.479 [2024-07-23 13:51:04.244580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:13.479 #30 NEW cov: 11715 ft: 14927 corp: 29/60b lim: 5 exec/s: 30 rss: 70Mb L: 3/5 MS: 1 ChangeBinInt- 00:09:13.479 [2024-07-23 13:51:04.303571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:13.479 [2024-07-23 13:51:04.303599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:13.479 #31 NEW cov: 11715 ft: 14980 corp: 30/61b lim: 5 exec/s: 15 rss: 70Mb L: 1/5 MS: 1 ShuffleBytes- 00:09:13.479 #31 DONE cov: 11715 ft: 14980 corp: 30/61b lim: 5 exec/s: 15 rss: 70Mb 00:09:13.479 Done 31 runs in 2 second(s) 00:09:13.479 13:51:04 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_8.conf 00:09:13.479 13:51:04 -- ../common.sh@72 -- # (( i++ )) 00:09:13.479 13:51:04 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:13.479 13:51:04 -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:09:13.479 13:51:04 -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:09:13.479 13:51:04 -- nvmf/run.sh@24 -- # local timen=1 00:09:13.479 13:51:04 -- nvmf/run.sh@25 -- # local core=0x1 00:09:13.479 13:51:04 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:13.479 13:51:04 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:09:13.479 13:51:04 -- nvmf/run.sh@29 -- # printf %02d 9 00:09:13.479 13:51:04 -- nvmf/run.sh@29 -- # port=4409 00:09:13.479 13:51:04 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:13.479 13:51:04 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:09:13.479 13:51:04 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:13.479 13:51:04 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 -r /var/tmp/spdk9.sock 00:09:13.743 [2024-07-23 13:51:04.518689] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:13.743 [2024-07-23 13:51:04.518765] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912366 ] 00:09:13.743 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.002 [2024-07-23 13:51:04.813669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.002 [2024-07-23 13:51:04.909709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.002 [2024-07-23 13:51:04.909904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.002 [2024-07-23 13:51:04.972596] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.002 [2024-07-23 13:51:04.988826] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:09:14.002 INFO: Running with entropic power schedule (0xFF, 100). 00:09:14.002 INFO: Seed: 2231663205 00:09:14.261 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:14.261 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:14.261 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:14.261 INFO: A corpus is not provided, starting from an empty corpus 00:09:14.261 [2024-07-23 13:51:05.066360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.261 [2024-07-23 13:51:05.066416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.261 #2 INITED cov: 11480 ft: 11489 corp: 1/1b exec/s: 0 rss: 67Mb 00:09:14.261 [2024-07-23 13:51:05.125942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.261 [2024-07-23 13:51:05.125987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.830 NEW_FUNC[1/1]: 0x195d040 in _reactor_run /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:894 00:09:14.830 #3 NEW cov: 11601 ft: 12083 corp: 2/2b lim: 5 exec/s: 0 rss: 68Mb L: 1/1 MS: 1 CopyPart- 00:09:14.831 [2024-07-23 13:51:05.608244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.608293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.608400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.608423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:09:14.831 #4 NEW cov: 11607 ft: 13115 corp: 3/4b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 CopyPart- 00:09:14.831 [2024-07-23 13:51:05.679079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.679117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.679223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.679245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.679353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.679376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.679482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.679505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.831 #5 NEW cov: 11692 ft: 13714 corp: 4/8b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:14.831 [2024-07-23 13:51:05.749518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.749554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.749651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.749675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.749779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.749799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:14.831 [2024-07-23 13:51:05.749900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.749921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:14.831 #6 NEW cov: 11692 ft: 13827 corp: 5/12b lim: 5 exec/s: 0 rss: 69Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:14.831 [2024-07-23 13:51:05.808477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:09:14.831 [2024-07-23 13:51:05.808515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:14.831 #7 NEW cov: 11692 ft: 13885 corp: 6/13b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ChangeByte- 00:09:15.089 [2024-07-23 13:51:05.868687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:05.868723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.089 #8 NEW cov: 11692 ft: 13955 corp: 7/14b lim: 5 exec/s: 0 rss: 69Mb L: 1/4 MS: 1 ShuffleBytes- 00:09:15.089 [2024-07-23 13:51:05.940566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:05.940606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.089 [2024-07-23 13:51:05.940712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:05.940733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.089 [2024-07-23 13:51:05.940843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:05.940864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.089 [2024-07-23 13:51:05.940963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:05.940989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.089 [2024-07-23 13:51:05.941100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:05.941122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.089 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:15.089 #9 NEW cov: 11715 ft: 14113 corp: 8/19b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:15.089 [2024-07-23 13:51:06.010908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:06.010944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.089 [2024-07-23 13:51:06.011053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:06.011076] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.089 [2024-07-23 13:51:06.011178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.089 [2024-07-23 13:51:06.011202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.090 [2024-07-23 13:51:06.011316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.011339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.090 [2024-07-23 13:51:06.011441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.011463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.090 #10 NEW cov: 11715 ft: 14195 corp: 9/24b lim: 5 exec/s: 10 rss: 69Mb L: 5/5 MS: 1 ChangeByte- 00:09:15.090 [2024-07-23 13:51:06.081139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.081175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.090 [2024-07-23 13:51:06.081278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.081303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.090 [2024-07-23 13:51:06.081405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.081427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.090 [2024-07-23 13:51:06.081531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.081553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.090 [2024-07-23 13:51:06.081661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.090 [2024-07-23 13:51:06.081687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.349 #11 NEW cov: 11715 ft: 14223 corp: 10/29b lim: 5 exec/s: 11 rss: 69Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:15.349 [2024-07-23 13:51:06.149869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:15.349 [2024-07-23 13:51:06.149906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.349 #12 NEW cov: 11715 ft: 14246 corp: 11/30b lim: 5 exec/s: 12 rss: 69Mb L: 1/5 MS: 1 ChangeBit- 00:09:15.349 [2024-07-23 13:51:06.210637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.210674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.210783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.210806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.349 #13 NEW cov: 11715 ft: 14262 corp: 12/32b lim: 5 exec/s: 13 rss: 69Mb L: 2/5 MS: 1 InsertByte- 00:09:15.349 [2024-07-23 13:51:06.282086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.282121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.282229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.282251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.282352] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.282375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.282477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.282501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.282608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.282629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.349 #14 NEW cov: 11715 ft: 14275 corp: 13/37b lim: 5 exec/s: 14 rss: 69Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:15.349 [2024-07-23 13:51:06.342229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.342264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 
13:51:06.342358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.342380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.342491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.342514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.342616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.342637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.349 [2024-07-23 13:51:06.342736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.349 [2024-07-23 13:51:06.342758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.607 #15 NEW cov: 11715 ft: 14314 corp: 14/42b lim: 5 exec/s: 15 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:09:15.607 [2024-07-23 13:51:06.412515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.412551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.412647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.412669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.412766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.412787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.412892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.412913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.413017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.413039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.607 #16 NEW cov: 11715 ft: 14393 corp: 15/47b lim: 5 exec/s: 
16 rss: 69Mb L: 5/5 MS: 1 ChangeASCIIInt- 00:09:15.607 [2024-07-23 13:51:06.472841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.472876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.472979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.473001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.473104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.473126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.473243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.473267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.473367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.473388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.607 #17 NEW cov: 11715 ft: 14424 corp: 16/52b lim: 5 exec/s: 17 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:09:15.607 [2024-07-23 13:51:06.533199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.607 [2024-07-23 13:51:06.533239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.607 [2024-07-23 13:51:06.533346] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.533367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.608 [2024-07-23 13:51:06.533477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.533499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.608 [2024-07-23 13:51:06.533602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.533626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:09:15.608 [2024-07-23 13:51:06.533737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.533759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.608 #18 NEW cov: 11715 ft: 14466 corp: 17/57b lim: 5 exec/s: 18 rss: 69Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:15.608 [2024-07-23 13:51:06.603360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.603395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.608 [2024-07-23 13:51:06.603498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.603519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.608 [2024-07-23 13:51:06.603633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.603656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.608 [2024-07-23 13:51:06.603765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.603788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.608 [2024-07-23 13:51:06.603903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.608 [2024-07-23 13:51:06.603925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.866 #19 NEW cov: 11715 ft: 14492 corp: 18/62b lim: 5 exec/s: 19 rss: 70Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:15.866 [2024-07-23 13:51:06.672335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.672370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.672467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.672489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.866 #20 NEW cov: 11715 ft: 14509 corp: 19/64b lim: 5 exec/s: 20 rss: 70Mb L: 2/5 MS: 1 ShuffleBytes- 00:09:15.866 [2024-07-23 13:51:06.743740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.743774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.743883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.743905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.744011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.744033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.744138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.744160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.744263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.744285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.866 #21 NEW cov: 11715 ft: 14527 corp: 20/69b lim: 5 exec/s: 21 rss: 70Mb L: 5/5 MS: 1 ChangeASCIIInt- 00:09:15.866 [2024-07-23 13:51:06.804001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.804035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.804141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.804164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.804266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.804289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.804390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.804412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.804512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 
13:51:06.804533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:15.866 #22 NEW cov: 11715 ft: 14534 corp: 21/74b lim: 5 exec/s: 22 rss: 70Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:15.866 [2024-07-23 13:51:06.864446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.864481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.864581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.864605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.864713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.864735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.864841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.864863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:15.866 [2024-07-23 13:51:06.864967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:15.866 [2024-07-23 13:51:06.864989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.125 #23 NEW cov: 11715 ft: 14568 corp: 22/79b lim: 5 exec/s: 23 rss: 70Mb L: 5/5 MS: 1 ChangeASCIIInt- 00:09:16.125 [2024-07-23 13:51:06.934728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:06.934763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:06.934863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:06.934886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:06.934993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:06.935014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:06.935116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:06.935139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:06.935234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:06.935255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.125 #24 NEW cov: 11715 ft: 14649 corp: 23/84b lim: 5 exec/s: 24 rss: 70Mb L: 5/5 MS: 1 CrossOver- 00:09:16.125 [2024-07-23 13:51:07.005132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:07.005167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:07.005279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:07.005300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:07.005402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:07.005424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:07.005528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:07.005550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:16.125 [2024-07-23 13:51:07.005649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:16.125 [2024-07-23 13:51:07.005671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:16.125 #25 NEW cov: 11715 ft: 14658 corp: 24/89b lim: 5 exec/s: 12 rss: 70Mb L: 5/5 MS: 1 ChangeASCIIInt- 00:09:16.125 #25 DONE cov: 11715 ft: 14658 corp: 24/89b lim: 5 exec/s: 12 rss: 70Mb 00:09:16.125 Done 25 runs in 2 second(s) 00:09:16.384 13:51:07 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_9.conf 00:09:16.384 13:51:07 -- ../common.sh@72 -- # (( i++ )) 00:09:16.384 13:51:07 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:16.384 13:51:07 -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:09:16.384 13:51:07 -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:09:16.384 13:51:07 -- nvmf/run.sh@24 -- # local timen=1 00:09:16.384 13:51:07 -- nvmf/run.sh@25 -- # local core=0x1 00:09:16.384 13:51:07 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:16.384 13:51:07 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:09:16.384 13:51:07 -- nvmf/run.sh@29 -- # printf 
%02d 10 00:09:16.384 13:51:07 -- nvmf/run.sh@29 -- # port=4410 00:09:16.384 13:51:07 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:16.384 13:51:07 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:09:16.384 13:51:07 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:16.384 13:51:07 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 -r /var/tmp/spdk10.sock 00:09:16.384 [2024-07-23 13:51:07.222693] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:16.384 [2024-07-23 13:51:07.222771] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913090 ] 00:09:16.384 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.642 [2024-07-23 13:51:07.551486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.642 [2024-07-23 13:51:07.659718] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.642 [2024-07-23 13:51:07.659911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.900 [2024-07-23 13:51:07.722534] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.901 [2024-07-23 13:51:07.738775] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:09:16.901 INFO: Running with entropic power schedule (0xFF, 100). 00:09:16.901 INFO: Seed: 686688755 00:09:16.901 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:16.901 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:16.901 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:16.901 INFO: A corpus is not provided, starting from an empty corpus 00:09:16.901 #2 INITED exec/s: 0 rss: 61Mb 00:09:16.901 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:16.901 This may also happen if the target rejected all inputs we tried so far 00:09:16.901 [2024-07-23 13:51:07.793999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08a8a8a8 cdw11:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.901 [2024-07-23 13:51:07.794049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:16.901 [2024-07-23 13:51:07.794104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a8a8a8a8 cdw11:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:16.901 [2024-07-23 13:51:07.794128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.467 NEW_FUNC[1/670]: 0x48daf0 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:09:17.467 NEW_FUNC[2/670]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:17.467 #12 NEW cov: 11510 ft: 11497 corp: 2/24b lim: 40 exec/s: 0 rss: 68Mb L: 23/23 MS: 5 CrossOver-ChangeBit-CrossOver-ChangeBinInt-InsertRepeatedBytes- 00:09:17.467 [2024-07-23 13:51:08.305357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.305422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.467 [2024-07-23 13:51:08.305476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.305502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.467 [2024-07-23 13:51:08.305550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.305574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.467 #15 NEW cov: 11624 ft: 12113 corp: 3/48b lim: 40 exec/s: 0 rss: 68Mb L: 24/24 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:09:17.467 [2024-07-23 13:51:08.385405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000f300 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.385449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.467 [2024-07-23 13:51:08.385507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.385532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.467 [2024-07-23 13:51:08.385578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000ab6 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.385601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.467 #16 NEW cov: 11630 ft: 12462 corp: 4/72b lim: 40 exec/s: 0 rss: 68Mb L: 24/24 MS: 1 ChangeByte- 00:09:17.467 [2024-07-23 13:51:08.475637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.475684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.467 [2024-07-23 13:51:08.475736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.475760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.467 [2024-07-23 13:51:08.475807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00000ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.467 [2024-07-23 13:51:08.475831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.725 #17 NEW cov: 11715 ft: 12765 corp: 5/96b lim: 40 exec/s: 0 rss: 68Mb L: 24/24 MS: 1 ChangeBit- 00:09:17.725 [2024-07-23 13:51:08.555846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.555888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.725 [2024-07-23 13:51:08.555940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.555964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.725 [2024-07-23 13:51:08.556011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.556034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.725 #18 NEW cov: 11715 ft: 12919 corp: 6/120b lim: 40 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 CopyPart- 00:09:17.725 [2024-07-23 13:51:08.646059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.646101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.725 [2024-07-23 13:51:08.646154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.646178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.725 [2024-07-23 13:51:08.646234] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020200 cdw11:00000ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.646258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.725 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:17.725 #19 NEW cov: 11738 ft: 13014 corp: 7/144b lim: 40 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 ChangeBit- 00:09:17.725 [2024-07-23 13:51:08.726302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.726346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.725 [2024-07-23 13:51:08.726400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.726424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.725 [2024-07-23 13:51:08.726470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020200 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.725 [2024-07-23 13:51:08.726495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.984 #20 NEW cov: 11738 ft: 13068 corp: 8/172b lim: 40 exec/s: 20 rss: 69Mb L: 28/28 MS: 1 CopyPart- 00:09:17.984 [2024-07-23 13:51:08.826518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.826560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.826613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.826639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.826689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020200 cdw11:00080ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.826713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.984 #21 NEW cov: 11738 ft: 13173 corp: 9/196b lim: 40 exec/s: 21 rss: 69Mb L: 24/28 MS: 1 ChangeBinInt- 00:09:17.984 [2024-07-23 13:51:08.896752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000f300 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.896794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.896846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 
cid:5 nsid:0 cdw10:00005600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.896869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.896916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.896939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.984 #22 NEW cov: 11738 ft: 13202 corp: 10/220b lim: 40 exec/s: 22 rss: 69Mb L: 24/28 MS: 1 ChangeByte- 00:09:17.984 [2024-07-23 13:51:08.987033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.987075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.987133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.987157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.987203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.987236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:17.984 [2024-07-23 13:51:08.987285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000202 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:17.984 [2024-07-23 13:51:08.987309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.242 #23 NEW cov: 11738 ft: 13720 corp: 11/257b lim: 40 exec/s: 23 rss: 69Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:09:18.242 [2024-07-23 13:51:09.077226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.242 [2024-07-23 13:51:09.077270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.242 [2024-07-23 13:51:09.077322] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.242 [2024-07-23 13:51:09.077347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.242 [2024-07-23 13:51:09.077393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:000000f9 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.243 [2024-07-23 13:51:09.077423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.243 #24 NEW cov: 11738 ft: 13752 corp: 12/282b lim: 40 
exec/s: 24 rss: 69Mb L: 25/37 MS: 1 InsertByte- 00:09:18.243 [2024-07-23 13:51:09.147318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08a8a8a8 cdw11:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.243 [2024-07-23 13:51:09.147361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.243 [2024-07-23 13:51:09.147413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a8a8a8a8 cdw11:a8a8ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.243 [2024-07-23 13:51:09.147436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.243 #25 NEW cov: 11738 ft: 13835 corp: 13/305b lim: 40 exec/s: 25 rss: 69Mb L: 23/37 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:09:18.243 [2024-07-23 13:51:09.237578] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:08a8a8a8 cdw11:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.243 [2024-07-23 13:51:09.237620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.243 [2024-07-23 13:51:09.237672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:a8a8a8a8 cdw11:a8a8a8a8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.243 [2024-07-23 13:51:09.237695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.501 #26 NEW cov: 11738 ft: 13844 corp: 14/326b lim: 40 exec/s: 26 rss: 69Mb L: 21/37 MS: 1 EraseBytes- 00:09:18.501 [2024-07-23 13:51:09.307841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000f300 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.307889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.307942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000500 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.307966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.308012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.308035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.501 #27 NEW cov: 11738 ft: 13856 corp: 15/350b lim: 40 exec/s: 27 rss: 69Mb L: 24/37 MS: 1 ChangeBinInt- 00:09:18.501 [2024-07-23 13:51:09.378033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.378079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.378131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) 
qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.378155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.378202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.378236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.501 #28 NEW cov: 11738 ft: 13874 corp: 16/374b lim: 40 exec/s: 28 rss: 69Mb L: 24/37 MS: 1 ShuffleBytes- 00:09:18.501 [2024-07-23 13:51:09.468362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0000f3eb cdw11:ebebebeb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.468403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.468455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ebebebeb cdw11:ebeb0000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.468479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.468526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.468550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.501 [2024-07-23 13:51:09.468595] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.501 [2024-07-23 13:51:09.468619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.760 #29 NEW cov: 11738 ft: 13882 corp: 17/409b lim: 40 exec/s: 29 rss: 69Mb L: 35/37 MS: 1 InsertRepeatedBytes- 00:09:18.760 [2024-07-23 13:51:09.548616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.548657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.548709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.548738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.548784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.548808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.548853] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0ab6 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.548878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:18.760 #30 NEW cov: 11738 ft: 13915 corp: 18/441b lim: 40 exec/s: 30 rss: 69Mb L: 32/37 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:09:18.760 [2024-07-23 13:51:09.628773] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.628816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.628869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.628893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.628939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:0000010d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.628963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.760 #31 NEW cov: 11738 ft: 13956 corp: 19/465b lim: 40 exec/s: 31 rss: 69Mb L: 24/37 MS: 1 CMP- DE: "\001\015"- 00:09:18.760 [2024-07-23 13:51:09.718985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.719027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.719079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.719103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:18.760 [2024-07-23 13:51:09.719149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00580000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:18.760 [2024-07-23 13:51:09.719172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:18.760 #32 NEW cov: 11738 ft: 13998 corp: 20/490b lim: 40 exec/s: 32 rss: 69Mb L: 25/37 MS: 1 InsertByte- 00:09:19.019 [2024-07-23 13:51:09.789259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000025 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.019 [2024-07-23 13:51:09.789303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:19.019 [2024-07-23 13:51:09.789355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:19.019 [2024-07-23 13:51:09.789380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:19.019 [2024-07-23 13:51:09.789432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:19.019 [2024-07-23 13:51:09.789456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:19.019 #33 NEW cov: 11738 ft: 14002 corp: 21/514b lim: 40 exec/s: 16 rss: 69Mb L: 24/37 MS: 1 ChangeByte- 00:09:19.019 #33 DONE cov: 11738 ft: 14002 corp: 21/514b lim: 40 exec/s: 16 rss: 69Mb 00:09:19.019 ###### Recommended dictionary. ###### 00:09:19.019 "\377\377\377\377\377\377\377\377" # Uses: 1 00:09:19.019 "\001\015" # Uses: 0 00:09:19.019 ###### End of recommended dictionary. ###### 00:09:19.019 Done 33 runs in 2 second(s) 00:09:19.019 13:51:09 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_10.conf 00:09:19.019 13:51:09 -- ../common.sh@72 -- # (( i++ )) 00:09:19.019 13:51:09 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:19.019 13:51:09 -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:09:19.019 13:51:09 -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:09:19.019 13:51:09 -- nvmf/run.sh@24 -- # local timen=1 00:09:19.019 13:51:09 -- nvmf/run.sh@25 -- # local core=0x1 00:09:19.019 13:51:09 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:19.019 13:51:09 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:09:19.019 13:51:09 -- nvmf/run.sh@29 -- # printf %02d 11 00:09:19.019 13:51:09 -- nvmf/run.sh@29 -- # port=4411 00:09:19.019 13:51:09 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:19.019 13:51:10 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:09:19.019 13:51:10 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:19.019 13:51:10 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 -r /var/tmp/spdk11.sock 00:09:19.019 [2024-07-23 13:51:10.035936] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:19.019 [2024-07-23 13:51:10.036010] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913516 ] 00:09:19.278 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.537 [2024-07-23 13:51:10.350790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.537 [2024-07-23 13:51:10.456745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:19.537 [2024-07-23 13:51:10.456941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.537 [2024-07-23 13:51:10.519505] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.537 [2024-07-23 13:51:10.535719] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:09:19.537 INFO: Running with entropic power schedule (0xFF, 100). 00:09:19.537 INFO: Seed: 3481691911 00:09:19.796 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:19.796 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:19.796 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:19.796 INFO: A corpus is not provided, starting from an empty corpus 00:09:19.796 #2 INITED exec/s: 0 rss: 61Mb 00:09:19.796 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:19.796 This may also happen if the target rejected all inputs we tried so far 00:09:19.796 [2024-07-23 13:51:10.584900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:19.796 [2024-07-23 13:51:10.584941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.054 NEW_FUNC[1/671]: 0x48f860 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:09:20.054 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:20.054 #3 NEW cov: 11518 ft: 11519 corp: 2/10b lim: 40 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:09:20.054 [2024-07-23 13:51:11.056208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.054 [2024-07-23 13:51:11.056266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.312 #5 NEW cov: 11636 ft: 12080 corp: 3/19b lim: 40 exec/s: 0 rss: 68Mb L: 9/9 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:20.312 [2024-07-23 13:51:11.106246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:60010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.312 [2024-07-23 13:51:11.106302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.312 #6 NEW cov: 11642 ft: 12269 corp: 4/29b lim: 40 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 InsertByte- 00:09:20.312 [2024-07-23 13:51:11.166428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.312 [2024-07-23 13:51:11.166464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.312 #7 NEW cov: 11727 ft: 12516 corp: 5/38b lim: 40 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 CopyPart- 00:09:20.312 [2024-07-23 13:51:11.226574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.312 [2024-07-23 13:51:11.226609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.312 #8 NEW cov: 11727 ft: 12560 corp: 6/47b lim: 40 exec/s: 0 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:09:20.312 [2024-07-23 13:51:11.276736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a600100 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.312 [2024-07-23 13:51:11.276771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.312 #9 NEW cov: 11727 ft: 12619 corp: 7/58b lim: 40 exec/s: 0 rss: 69Mb L: 11/11 MS: 1 CrossOver- 00:09:20.312 [2024-07-23 13:51:11.316832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000b00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.312 [2024-07-23 13:51:11.316866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.570 #10 NEW cov: 11727 ft: 12685 corp: 8/67b lim: 40 exec/s: 0 rss: 69Mb L: 9/11 MS: 1 ShuffleBytes- 00:09:20.570 [2024-07-23 13:51:11.366979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.367013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.570 #11 NEW cov: 11727 ft: 12737 corp: 9/76b lim: 40 exec/s: 0 rss: 69Mb L: 9/11 MS: 1 ChangeBinInt- 00:09:20.570 [2024-07-23 13:51:11.417070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:79000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.417104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.570 #17 NEW cov: 11727 ft: 12873 corp: 10/85b lim: 40 exec/s: 0 rss: 69Mb L: 9/11 MS: 1 ChangeByte- 00:09:20.570 [2024-07-23 13:51:11.477880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:79000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.477914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.570 [2024-07-23 13:51:11.477990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.478010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.570 [2024-07-23 
13:51:11.478080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.478100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.570 [2024-07-23 13:51:11.478172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.478192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.570 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:20.570 #18 NEW cov: 11750 ft: 13756 corp: 11/118b lim: 40 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:09:20.570 [2024-07-23 13:51:11.537460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.570 [2024-07-23 13:51:11.537495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.570 #19 NEW cov: 11750 ft: 13797 corp: 12/126b lim: 40 exec/s: 19 rss: 69Mb L: 8/33 MS: 1 EraseBytes- 00:09:20.828 [2024-07-23 13:51:11.597640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:820affff cdw11:feff0400 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.828 [2024-07-23 13:51:11.597675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.828 #24 NEW cov: 11750 ft: 13815 corp: 13/137b lim: 40 exec/s: 24 rss: 69Mb L: 11/33 MS: 5 ShuffleBytes-InsertRepeatedBytes-ChangeBinInt-InsertByte-CMP- DE: "\004\000\000\000"- 00:09:20.828 [2024-07-23 13:51:11.637915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.828 [2024-07-23 13:51:11.637949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.828 [2024-07-23 13:51:11.638021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:63630100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.828 [2024-07-23 13:51:11.638041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.828 #25 NEW cov: 11750 ft: 14057 corp: 14/160b lim: 40 exec/s: 25 rss: 69Mb L: 23/33 MS: 1 InsertRepeatedBytes- 00:09:20.828 [2024-07-23 13:51:11.687827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b000000 cdw11:005b0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.828 [2024-07-23 13:51:11.687860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.829 #26 NEW cov: 11750 ft: 14081 corp: 15/170b lim: 40 exec/s: 26 rss: 69Mb L: 10/33 MS: 1 InsertByte- 00:09:20.829 [2024-07-23 13:51:11.728583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:20.829 [2024-07-23 13:51:11.728616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.728695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.728716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.728788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.728808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.728877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.728897] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.829 #27 NEW cov: 11750 ft: 14120 corp: 16/205b lim: 40 exec/s: 27 rss: 69Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:09:20.829 [2024-07-23 13:51:11.788750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:79000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.788783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.788857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:0a919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.788878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.788947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.788967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.789039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.789058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:20.829 #28 NEW cov: 11750 ft: 14191 corp: 17/238b lim: 40 exec/s: 28 rss: 70Mb L: 33/35 MS: 1 ShuffleBytes- 00:09:20.829 [2024-07-23 13:51:11.848486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.848520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:20.829 [2024-07-23 13:51:11.848594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:63636363 cdw11:63630100 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:20.829 [2024-07-23 13:51:11.848614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.087 #29 NEW cov: 11750 ft: 14238 corp: 18/261b lim: 40 exec/s: 29 rss: 70Mb L: 23/35 MS: 1 ChangeBit- 00:09:21.087 [2024-07-23 13:51:11.908497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0bfeffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.087 [2024-07-23 13:51:11.908530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.087 [2024-07-23 13:51:11.948634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0bfeffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.087 [2024-07-23 13:51:11.948667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.087 #31 NEW cov: 11750 ft: 14249 corp: 19/270b lim: 40 exec/s: 31 rss: 70Mb L: 9/35 MS: 2 CMP-ShuffleBytes- DE: "\376\377\377\377"- 00:09:21.087 [2024-07-23 13:51:11.988716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:012d36f1 cdw11:71b423a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.087 [2024-07-23 13:51:11.988750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.087 #32 NEW cov: 11750 ft: 14266 corp: 20/279b lim: 40 exec/s: 32 rss: 70Mb L: 9/35 MS: 1 CMP- DE: "\001-6\361q\264#\246"- 00:09:21.087 [2024-07-23 13:51:12.038903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0b010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.087 [2024-07-23 13:51:12.038937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.087 #33 NEW cov: 11750 ft: 14342 corp: 21/288b lim: 40 exec/s: 33 rss: 70Mb L: 9/35 MS: 1 ChangeBinInt- 00:09:21.087 [2024-07-23 13:51:12.099036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a600100 cdw11:003f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.087 [2024-07-23 13:51:12.099070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.345 #34 NEW cov: 11750 ft: 14352 corp: 22/300b lim: 40 exec/s: 34 rss: 70Mb L: 12/35 MS: 1 InsertByte- 00:09:21.345 [2024-07-23 13:51:12.159208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.159248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.345 #35 NEW cov: 11750 ft: 14367 corp: 23/309b lim: 40 exec/s: 35 rss: 70Mb L: 9/35 MS: 1 CopyPart- 00:09:21.345 [2024-07-23 13:51:12.219373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.219405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.345 #36 NEW cov: 11750 ft: 14385 
corp: 24/318b lim: 40 exec/s: 36 rss: 70Mb L: 9/35 MS: 1 ShuffleBytes- 00:09:21.345 [2024-07-23 13:51:12.270080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:79000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.270113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.345 [2024-07-23 13:51:12.270188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000a91 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.270209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.345 [2024-07-23 13:51:12.270290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.270311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.345 [2024-07-23 13:51:12.270381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:91919191 cdw11:91919191 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.270401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.345 #37 NEW cov: 11750 ft: 14401 corp: 25/357b lim: 40 exec/s: 37 rss: 70Mb L: 39/39 MS: 1 CrossOver- 00:09:21.345 [2024-07-23 13:51:12.319663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:012d36f1 cdw11:71b423a6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.345 [2024-07-23 13:51:12.319700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.345 #38 NEW cov: 11750 ft: 14421 corp: 26/366b lim: 40 exec/s: 38 rss: 70Mb L: 9/39 MS: 1 PersAutoDict- DE: "\001-6\361q\264#\246"- 00:09:21.604 [2024-07-23 13:51:12.380473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.380506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.380580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.380599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.380668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.380686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.380758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:fffffeff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.380777] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.604 #39 NEW cov: 11750 ft: 14443 corp: 27/405b lim: 40 exec/s: 39 rss: 70Mb L: 39/39 MS: 1 PersAutoDict- DE: "\376\377\377\377"- 00:09:21.604 [2024-07-23 13:51:12.440254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:01000000 cdw11:00010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.440288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.440364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.440384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.604 #40 NEW cov: 11750 ft: 14457 corp: 28/422b lim: 40 exec/s: 40 rss: 70Mb L: 17/39 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:09:21.604 [2024-07-23 13:51:12.491020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:63636363 cdw11:636363ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.491055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.491131] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.491152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.491223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.491243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.491315] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:63636363 cdw11:63636301 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.491335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.491408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:4000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.491431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:21.604 #41 NEW cov: 11750 ft: 14546 corp: 29/462b lim: 40 exec/s: 41 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:09:21.604 [2024-07-23 13:51:12.551010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:000000fd SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.551045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.551120] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.551139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.551220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.551239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:21.604 [2024-07-23 13:51:12.551306] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:21.604 [2024-07-23 13:51:12.551325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:21.604 #42 NEW cov: 11750 ft: 14562 corp: 30/497b lim: 40 exec/s: 21 rss: 70Mb L: 35/40 MS: 1 ChangeBinInt- 00:09:21.604 #42 DONE cov: 11750 ft: 14562 corp: 30/497b lim: 40 exec/s: 21 rss: 70Mb 00:09:21.604 ###### Recommended dictionary. ###### 00:09:21.604 "\001\000\000\000\000\000\000\000" # Uses: 1 00:09:21.604 "\004\000\000\000" # Uses: 0 00:09:21.604 "\376\377\377\377" # Uses: 1 00:09:21.604 "\001-6\361q\264#\246" # Uses: 1 00:09:21.604 ###### End of recommended dictionary. ###### 00:09:21.604 Done 42 runs in 2 second(s) 00:09:21.863 13:51:12 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_11.conf 00:09:21.863 13:51:12 -- ../common.sh@72 -- # (( i++ )) 00:09:21.863 13:51:12 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:21.863 13:51:12 -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:09:21.863 13:51:12 -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:09:21.863 13:51:12 -- nvmf/run.sh@24 -- # local timen=1 00:09:21.863 13:51:12 -- nvmf/run.sh@25 -- # local core=0x1 00:09:21.863 13:51:12 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:21.863 13:51:12 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:09:21.863 13:51:12 -- nvmf/run.sh@29 -- # printf %02d 12 00:09:21.863 13:51:12 -- nvmf/run.sh@29 -- # port=4412 00:09:21.863 13:51:12 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:21.863 13:51:12 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:09:21.863 13:51:12 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:21.863 13:51:12 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 -r /var/tmp/spdk12.sock 00:09:21.863 [2024-07-23 13:51:12.778065] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
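The `nvmf/run.sh@NN` traced commands above show how the harness tears down fuzzer 11 and prepares fuzzer 12: the fuzzer index is formatted with `printf %02d`, mapped to TCP port 44xx, substituted into a per-run JSON config with sed, and then `llvm_nvme_fuzz` is launched against that port. A minimal sketch of that loop follows; variable names and the redirect into the per-run config are assumptions, and the real logic lives in test/fuzz/llvm/nvmf/run.sh and ../common.sh inside the SPDK tree.

    # Sketch of the per-fuzzer loop traced above (names assumed, not verbatim).
    for ((i = 0; i < fuzz_num; i++)); do
      nvmf_cfg="/tmp/fuzz_json_${i}.conf"
      port="44$(printf %02d "$i")"            # fuzzer 12 -> port 4412
      corpus="$rootdir/../corpus/llvm_nvmf_${i}"
      mkdir -p "$corpus"
      trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
      # Clone the template config, retargeting the listener from the default 4420.
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
          "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
          -F "$trid" -c "$nvmf_cfg" -t 1 -D "$corpus" -Z "$i" -r "/var/tmp/spdk${i}.sock"
      rm -rf "$nvmf_cfg"                      # cleanup traced at the top of this block
    done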
00:09:21.863 [2024-07-23 13:51:12.778137] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913919 ] 00:09:21.863 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.121 [2024-07-23 13:51:13.135773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.379 [2024-07-23 13:51:13.238303] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.379 [2024-07-23 13:51:13.238498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.379 [2024-07-23 13:51:13.301097] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.379 [2024-07-23 13:51:13.317334] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:09:22.379 INFO: Running with entropic power schedule (0xFF, 100). 00:09:22.379 INFO: Seed: 1969726093 00:09:22.379 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:22.379 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:22.379 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:09:22.379 INFO: A corpus is not provided, starting from an empty corpus 00:09:22.379 #2 INITED exec/s: 0 rss: 61Mb 00:09:22.379 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:22.379 This may also happen if the target rejected all inputs we tried so far 00:09:22.379 [2024-07-23 13:51:13.383089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.379 [2024-07-23 13:51:13.383128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.895 NEW_FUNC[1/671]: 0x4915d0 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:09:22.895 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:22.895 #4 NEW cov: 11521 ft: 11522 corp: 2/13b lim: 40 exec/s: 0 rss: 68Mb L: 12/12 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:22.895 [2024-07-23 13:51:13.854209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.895 [2024-07-23 13:51:13.854315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:22.895 #5 NEW cov: 11634 ft: 12070 corp: 3/25b lim: 40 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ShuffleBytes- 00:09:22.895 [2024-07-23 13:51:13.914244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:22.895 [2024-07-23 13:51:13.914283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.153 #11 NEW cov: 11640 ft: 12436 corp: 4/36b lim: 40 exec/s: 0 rss: 68Mb L: 11/12 MS: 1 InsertRepeatedBytes- 00:09:23.153 [2024-07-23 13:51:13.964384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacaceca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.153 [2024-07-23 13:51:13.964420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.153 #12 NEW cov: 11725 ft: 12639 corp: 5/48b lim: 40 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 ChangeBit- 00:09:23.153 [2024-07-23 13:51:14.024604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a636363 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.153 [2024-07-23 13:51:14.024639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.153 #13 NEW cov: 11725 ft: 12785 corp: 6/60b lim: 40 exec/s: 0 rss: 68Mb L: 12/12 MS: 1 InsertByte- 00:09:23.153 [2024-07-23 13:51:14.084915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:277e3b3b cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.153 [2024-07-23 13:51:14.084949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.153 [2024-07-23 13:51:14.085026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.153 [2024-07-23 13:51:14.085045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.153 #23 NEW cov: 11725 ft: 13543 corp: 7/82b lim: 40 exec/s: 0 rss: 68Mb L: 22/22 MS: 5 InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:09:23.153 [2024-07-23 13:51:14.135055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a6363ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.153 [2024-07-23 13:51:14.135090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.153 [2024-07-23 13:51:14.135160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffff63 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.153 [2024-07-23 13:51:14.135179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.411 #24 NEW cov: 11725 ft: 13625 corp: 8/102b lim: 40 exec/s: 0 rss: 69Mb L: 20/22 MS: 1 InsertRepeatedBytes- 00:09:23.411 [2024-07-23 13:51:14.195201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.195244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.411 [2024-07-23 13:51:14.195314] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.195333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.411 #25 NEW cov: 11725 ft: 13709 corp: 9/122b lim: 40 exec/s: 0 rss: 69Mb L: 20/22 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\000"- 00:09:23.411 [2024-07-23 
13:51:14.245385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacaceca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.245418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.411 [2024-07-23 13:51:14.245488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.245508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.411 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:23.411 #26 NEW cov: 11748 ft: 13764 corp: 10/142b lim: 40 exec/s: 0 rss: 69Mb L: 20/22 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\000"- 00:09:23.411 [2024-07-23 13:51:14.305509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1708caca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.305542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.411 [2024-07-23 13:51:14.305613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.305633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.411 #32 NEW cov: 11748 ft: 13786 corp: 11/162b lim: 40 exec/s: 0 rss: 69Mb L: 20/22 MS: 1 ChangeBit- 00:09:23.411 [2024-07-23 13:51:14.365525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a636363 cdw11:63636341 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.365563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.411 #33 NEW cov: 11748 ft: 13809 corp: 12/173b lim: 40 exec/s: 33 rss: 69Mb L: 11/22 MS: 1 ChangeByte- 00:09:23.411 [2024-07-23 13:51:14.415631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0a63 cdw11:63636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.411 [2024-07-23 13:51:14.415664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.669 #35 NEW cov: 11748 ft: 13818 corp: 13/184b lim: 40 exec/s: 35 rss: 69Mb L: 11/22 MS: 2 CopyPart-CrossOver- 00:09:23.669 [2024-07-23 13:51:14.455938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacaca17 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.455971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.669 [2024-07-23 13:51:14.456042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0acacaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.456063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:09:23.669 #36 NEW cov: 11748 ft: 13900 corp: 14/207b lim: 40 exec/s: 36 rss: 69Mb L: 23/23 MS: 1 CopyPart- 00:09:23.669 [2024-07-23 13:51:14.505869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0a0aff cdw11:ff636363 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.505901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.669 #37 NEW cov: 11748 ft: 13965 corp: 15/218b lim: 40 exec/s: 37 rss: 69Mb L: 11/23 MS: 1 CMP- DE: "\377\377"- 00:09:23.669 [2024-07-23 13:51:14.566077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a636063 cdw11:63636341 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.566110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.669 #38 NEW cov: 11748 ft: 13983 corp: 16/229b lim: 40 exec/s: 38 rss: 69Mb L: 11/23 MS: 1 ChangeBinInt- 00:09:23.669 [2024-07-23 13:51:14.626294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.626327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.669 #39 NEW cov: 11748 ft: 14065 corp: 17/238b lim: 40 exec/s: 39 rss: 69Mb L: 9/23 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\001"- 00:09:23.669 [2024-07-23 13:51:14.666572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1708caca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.666605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.669 [2024-07-23 13:51:14.666676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.669 [2024-07-23 13:51:14.666696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.926 #45 NEW cov: 11748 ft: 14093 corp: 18/258b lim: 40 exec/s: 45 rss: 69Mb L: 20/23 MS: 1 ChangeBit- 00:09:23.926 [2024-07-23 13:51:14.726705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1708caca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.926 [2024-07-23 13:51:14.726740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.926 [2024-07-23 13:51:14.726811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.926 [2024-07-23 13:51:14.726836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.926 #46 NEW cov: 11748 ft: 14107 corp: 19/278b lim: 40 exec/s: 46 rss: 69Mb L: 20/23 MS: 1 ChangeBit- 00:09:23.926 [2024-07-23 13:51:14.776685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a636063 cdw11:63634163 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:23.926 [2024-07-23 13:51:14.776719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.926 #47 NEW cov: 11748 ft: 14154 corp: 20/289b lim: 40 exec/s: 47 rss: 69Mb L: 11/23 MS: 1 ShuffleBytes- 00:09:23.926 [2024-07-23 13:51:14.837072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0acacaca cdw11:cacecaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.926 [2024-07-23 13:51:14.837106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.926 [2024-07-23 13:51:14.837178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacaca17 cdw11:0acacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.926 [2024-07-23 13:51:14.837197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.926 #48 NEW cov: 11748 ft: 14195 corp: 21/312b lim: 40 exec/s: 48 rss: 69Mb L: 23/23 MS: 1 CopyPart- 00:09:23.926 [2024-07-23 13:51:14.887531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.926 [2024-07-23 13:51:14.887563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.926 [2024-07-23 13:51:14.887633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.927 [2024-07-23 13:51:14.887653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:23.927 [2024-07-23 13:51:14.887712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff007a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.927 [2024-07-23 13:51:14.887730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:23.927 [2024-07-23 13:51:14.887798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7a7a7a7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.927 [2024-07-23 13:51:14.887817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:23.927 #49 NEW cov: 11748 ft: 14582 corp: 22/348b lim: 40 exec/s: 49 rss: 69Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:09:23.927 [2024-07-23 13:51:14.937321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170affff cdw11:ffffffca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.927 [2024-07-23 13:51:14.937353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:23.927 [2024-07-23 13:51:14.937423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:23.927 [2024-07-23 13:51:14.937442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.184 #50 NEW cov: 11748 ft: 14603 corp: 23/365b lim: 40 
exec/s: 50 rss: 69Mb L: 17/36 MS: 1 InsertRepeatedBytes- 00:09:24.184 [2024-07-23 13:51:14.987248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7b3dffff cdw11:ffffffef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:14.987286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.184 #55 NEW cov: 11748 ft: 14609 corp: 24/375b lim: 40 exec/s: 55 rss: 69Mb L: 10/36 MS: 5 InsertByte-ChangeByte-InsertByte-ChangeBinInt-CrossOver- 00:09:24.184 [2024-07-23 13:51:15.027903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.027937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.184 [2024-07-23 13:51:15.028008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.028027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.184 [2024-07-23 13:51:15.028096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff007a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.028114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:24.184 [2024-07-23 13:51:15.028182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7a7a7b7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.028200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:24.184 #56 NEW cov: 11748 ft: 14626 corp: 25/411b lim: 40 exec/s: 56 rss: 69Mb L: 36/36 MS: 1 ChangeBit- 00:09:24.184 [2024-07-23 13:51:15.087581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a010000 cdw11:00fe0100 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.087614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.184 #60 NEW cov: 11748 ft: 14635 corp: 26/419b lim: 40 exec/s: 60 rss: 70Mb L: 8/36 MS: 4 EraseBytes-ChangeByte-CopyPart-CopyPart- 00:09:24.184 [2024-07-23 13:51:15.147962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:1708caca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.147995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.184 [2024-07-23 13:51:15.148067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.184 [2024-07-23 13:51:15.148087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.184 #61 NEW cov: 11748 ft: 14640 corp: 27/436b lim: 40 exec/s: 61 rss: 70Mb L: 17/36 MS: 1 EraseBytes- 
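The `#N NEW` records interleaved with the NVMe notices are standard libFuzzer status lines: `cov` counts covered code points, `ft` counts features, `corp` gives corpus entries and total bytes, `lim` is the current cap on input length, `exec/s` the execution rate, `rss` resident memory, `L` appears to be the new unit's length against the largest unit in the corpus, and `MS` the mutation chain that produced it (ShuffleBytes, CrossOver, PersAutoDict with a dictionary entry, and so on). To chart coverage growth from a console log like this one, something along these lines should work; this is a sketch, with `build.log` standing in for this output.

    # Emit "event coverage exec/s" for every libFuzzer status record in the log.
    awk '{
      ev = cov = rate = ""
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^#[0-9]+$/) ev = $i
        else if ($i == "cov:") cov = $(i + 1)
        else if ($i == "exec/s:") rate = $(i + 1)
      }
      if (ev != "" && cov != "") print ev, cov, rate
    }' build.log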
00:09:24.443 [2024-07-23 13:51:15.207948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7b3dffff cdw11:ffbfffef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.207983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.443 #62 NEW cov: 11748 ft: 14656 corp: 28/446b lim: 40 exec/s: 62 rss: 70Mb L: 10/36 MS: 1 ChangeBit- 00:09:24.443 [2024-07-23 13:51:15.268132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:3b7b3dff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.268167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.443 #63 NEW cov: 11748 ft: 14664 corp: 29/457b lim: 40 exec/s: 63 rss: 70Mb L: 11/36 MS: 1 InsertByte- 00:09:24.443 [2024-07-23 13:51:15.318827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:170acaca cdw11:cacacaca SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.318865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.443 [2024-07-23 13:51:15.318940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:cacacaff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.318960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.443 [2024-07-23 13:51:15.319029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffff007a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.319047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:24.443 [2024-07-23 13:51:15.319115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:7a7a7b7a cdw11:7a7a7a7a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.319133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:24.443 #64 NEW cov: 11748 ft: 14684 corp: 30/493b lim: 40 exec/s: 64 rss: 70Mb L: 36/36 MS: 1 ShuffleBytes- 00:09:24.443 [2024-07-23 13:51:15.378646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:277e3b3b cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.378681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:24.443 [2024-07-23 13:51:15.378752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:76767676 cdw11:76767676 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:24.443 [2024-07-23 13:51:15.378772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:24.443 #65 NEW cov: 11748 ft: 14707 corp: 31/515b lim: 40 exec/s: 32 rss: 70Mb L: 22/36 MS: 1 ShuffleBytes- 00:09:24.443 #65 DONE cov: 11748 ft: 14707 corp: 31/515b lim: 40 exec/s: 32 rss: 70Mb 00:09:24.443 ###### Recommended 
dictionary. ###### 00:09:24.443 "\377\377\377\377\377\377\377\000" # Uses: 1 00:09:24.443 "\377\377" # Uses: 0 00:09:24.443 "\001\000\000\000\000\000\000\001" # Uses: 0 00:09:24.443 ###### End of recommended dictionary. ###### 00:09:24.443 Done 65 runs in 2 second(s) 00:09:24.702 13:51:15 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_12.conf 00:09:24.702 13:51:15 -- ../common.sh@72 -- # (( i++ )) 00:09:24.702 13:51:15 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:24.702 13:51:15 -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:09:24.702 13:51:15 -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:09:24.702 13:51:15 -- nvmf/run.sh@24 -- # local timen=1 00:09:24.702 13:51:15 -- nvmf/run.sh@25 -- # local core=0x1 00:09:24.702 13:51:15 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:24.702 13:51:15 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:09:24.702 13:51:15 -- nvmf/run.sh@29 -- # printf %02d 13 00:09:24.702 13:51:15 -- nvmf/run.sh@29 -- # port=4413 00:09:24.702 13:51:15 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:24.702 13:51:15 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:09:24.702 13:51:15 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:24.702 13:51:15 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 -r /var/tmp/spdk13.sock 00:09:24.702 [2024-07-23 13:51:15.614754] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:24.702 [2024-07-23 13:51:15.614834] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914324 ] 00:09:24.702 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.960 [2024-07-23 13:51:15.960724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.218 [2024-07-23 13:51:16.065440] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.218 [2024-07-23 13:51:16.065633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.218 [2024-07-23 13:51:16.128368] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.218 [2024-07-23 13:51:16.144596] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:09:25.218 INFO: Running with entropic power schedule (0xFF, 100). 
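For reference, the long `llvm_nvme_fuzz` invocation above mixes standard SPDK application options with fuzzer-specific ones. A flag-by-flag reading follows; `-m`, `-s`, `-c`, and `-r` are ordinary SPDK app options, while the meanings marked "assumed" are inferred from the traced script rather than from the tool's help output, and the variable names are placeholders.

    # Hedged breakdown of the invocation for fuzzer 13, as seen in this log.
    args=(
      -m 0x1                      # core mask: pin the SPDK reactor to core 0
      -s 512                      # hugepage memory size for the app, in MB
      -P "$output_dir/llvm/"      # prefix for fuzzer output artifacts (assumed)
      -F "$trid"                  # NVMe-oF transport ID of the target under test
      -c "$nvmf_cfg"              # per-run JSON config with the rewritten trsvcid
      -t 1                        # time budget in seconds (the script's timen=1)
      -D "$corpus_dir"            # corpus directory; libFuzzer reports files found here
      -Z 13                       # fuzzer index: selects the fuzz target and port 44xx (assumed)
      -r /var/tmp/spdk13.sock     # RPC socket for this SPDK instance
    )
    "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" "${args[@]}"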
00:09:25.218 INFO: Seed: 502757662 00:09:25.218 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:25.218 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:25.218 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:09:25.218 INFO: A corpus is not provided, starting from an empty corpus 00:09:25.218 #2 INITED exec/s: 0 rss: 61Mb 00:09:25.218 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:25.218 This may also happen if the target rejected all inputs we tried so far 00:09:25.218 [2024-07-23 13:51:16.200078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.218 [2024-07-23 13:51:16.200126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.218 [2024-07-23 13:51:16.200178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.218 [2024-07-23 13:51:16.200202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.218 [2024-07-23 13:51:16.200256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.218 [2024-07-23 13:51:16.200279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.218 [2024-07-23 13:51:16.200324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.218 [2024-07-23 13:51:16.200348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:25.218 [2024-07-23 13:51:16.200394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.218 [2024-07-23 13:51:16.200417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:25.735 NEW_FUNC[1/670]: 0x493190 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:09:25.735 NEW_FUNC[2/670]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:25.735 #12 NEW cov: 11508 ft: 11505 corp: 2/41b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 5 CMP-EraseBytes-ShuffleBytes-EraseBytes-InsertRepeatedBytes- DE: "-\000\000\000"- 00:09:25.735 [2024-07-23 13:51:16.711391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.735 [2024-07-23 13:51:16.711456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.735 [2024-07-23 13:51:16.711508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 
cdw10:1f1f1f17 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.735 [2024-07-23 13:51:16.711542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.735 [2024-07-23 13:51:16.711588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.735 [2024-07-23 13:51:16.711611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.735 [2024-07-23 13:51:16.711656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.735 [2024-07-23 13:51:16.711679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:25.735 [2024-07-23 13:51:16.711723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.735 [2024-07-23 13:51:16.711746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:25.993 #13 NEW cov: 11622 ft: 11903 corp: 3/81b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 ChangeBit- 00:09:25.993 [2024-07-23 13:51:16.811472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f2d00 cdw11:00001f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.993 [2024-07-23 13:51:16.811520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.993 [2024-07-23 13:51:16.811571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.993 [2024-07-23 13:51:16.811594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.993 [2024-07-23 13:51:16.811640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.993 [2024-07-23 13:51:16.811663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.993 [2024-07-23 13:51:16.811708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.811731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.811775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.811798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:25.994 #14 NEW cov: 11628 ft: 12167 corp: 4/121b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 PersAutoDict- DE: "-\000\000\000"- 00:09:25.994 [2024-07-23 13:51:16.881582] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.881626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.881677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.881700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.881745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.881774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.881819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.881843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.881887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.881910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:25.994 #15 NEW cov: 11713 ft: 12407 corp: 5/161b lim: 40 exec/s: 0 rss: 68Mb L: 40/40 MS: 1 ChangeByte- 00:09:25.994 [2024-07-23 13:51:16.951829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.951871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.951923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.951948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.951994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.952018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.952062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.952086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:25.994 [2024-07-23 13:51:16.952130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:25.994 [2024-07-23 13:51:16.952154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.253 #16 NEW cov: 11713 ft: 12563 corp: 6/201b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:26.253 [2024-07-23 13:51:17.042036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.042077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.042128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.042152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.042197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.042236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.042281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.042310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.042355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.042377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.253 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:26.253 #17 NEW cov: 11730 ft: 12667 corp: 7/241b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:26.253 [2024-07-23 13:51:17.112249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f2d00 cdw11:00001f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.112290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.112341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.112365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.112409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.253 [2024-07-23 13:51:17.112433] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.253 [2024-07-23 13:51:17.112479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.112502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.112546] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.112570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.254 #18 NEW cov: 11730 ft: 12708 corp: 8/281b lim: 40 exec/s: 18 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:26.254 [2024-07-23 13:51:17.202501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.202542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.202593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.202617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.202662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.202685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.202729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.202752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.202797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.202825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.254 #19 NEW cov: 11730 ft: 12769 corp: 9/321b lim: 40 exec/s: 19 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:26.254 [2024-07-23 13:51:17.272670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.272712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.272762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f17 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.272786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.272831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:9f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.272854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.272898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.272921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.254 [2024-07-23 13:51:17.272966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.254 [2024-07-23 13:51:17.272989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.513 #20 NEW cov: 11730 ft: 12813 corp: 10/361b lim: 40 exec/s: 20 rss: 69Mb L: 40/40 MS: 1 ChangeBit- 00:09:26.513 [2024-07-23 13:51:17.362939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:28000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.362980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.363031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.363054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.363099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.363121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.363166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:00000028 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.363188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.363243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.363266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.513 #21 NEW cov: 11730 ft: 12837 corp: 11/401b lim: 40 exec/s: 21 rss: 69Mb L: 40/40 MS: 1 ChangeBinInt- 00:09:26.513 [2024-07-23 13:51:17.453170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 
cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.453221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.453273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f17 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.453297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.453341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1fe81f cdw11:9f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.453364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.453408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.453431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.513 [2024-07-23 13:51:17.453475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.513 [2024-07-23 13:51:17.453498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.513 #22 NEW cov: 11730 ft: 12878 corp: 12/441b lim: 40 exec/s: 22 rss: 69Mb L: 40/40 MS: 1 ChangeByte- 00:09:26.772 [2024-07-23 13:51:17.543264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.543305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.543356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.543380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.543426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.543449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.772 #23 NEW cov: 11730 ft: 13407 corp: 13/472b lim: 40 exec/s: 23 rss: 69Mb L: 31/40 MS: 1 EraseBytes- 00:09:26.772 [2024-07-23 13:51:17.623480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.623522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.623572] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.623596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.623640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f000000 cdw11:281f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.623664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.772 #24 NEW cov: 11730 ft: 13419 corp: 14/501b lim: 40 exec/s: 24 rss: 69Mb L: 29/40 MS: 1 EraseBytes- 00:09:26.772 [2024-07-23 13:51:17.703855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f2d00 cdw11:00001f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.703895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.703947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.703970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.704015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.704038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.704082] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.704105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:26.772 [2024-07-23 13:51:17.704149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.772 [2024-07-23 13:51:17.704172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:26.772 #25 NEW cov: 11730 ft: 13452 corp: 15/541b lim: 40 exec/s: 25 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:27.032 [2024-07-23 13:51:17.794098] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.794139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.032 [2024-07-23 13:51:17.794190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f17 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.794224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:09:27.032 [2024-07-23 13:51:17.794270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.794294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.032 [2024-07-23 13:51:17.794338] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.794361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:27.032 [2024-07-23 13:51:17.794405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.794428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:27.032 #26 NEW cov: 11730 ft: 13474 corp: 16/581b lim: 40 exec/s: 26 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:27.032 [2024-07-23 13:51:17.864257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.864298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.032 [2024-07-23 13:51:17.864354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1fa4be SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.032 [2024-07-23 13:51:17.864378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:27.032 [2024-07-23 13:51:17.864423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1195f436 cdw11:2d001f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.864446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:17.864490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.864513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:17.864558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.864581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:27.033 #27 NEW cov: 11730 ft: 13530 corp: 17/621b lim: 40 exec/s: 27 rss: 69Mb L: 40/40 MS: 1 CMP- DE: "\244\276\021\225\3646-\000"- 00:09:27.033 [2024-07-23 13:51:17.934288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.934331] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:17.934381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1d1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.934406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:17.934450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f000000 cdw11:281f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.934474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.033 #28 NEW cov: 11730 ft: 13542 corp: 18/650b lim: 40 exec/s: 28 rss: 69Mb L: 29/40 MS: 1 ChangeBit- 00:09:27.033 [2024-07-23 13:51:17.995150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:2d171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.995188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:17.995265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.995285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:17.995356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:17171717 cdw11:17171717 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:17.995375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.033 #31 NEW cov: 11730 ft: 13704 corp: 19/677b lim: 40 exec/s: 31 rss: 69Mb L: 27/40 MS: 3 CrossOver-ShuffleBytes-InsertRepeatedBytes- 00:09:27.033 [2024-07-23 13:51:18.045415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:18.045454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:18.045524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f17 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:18.045545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:18.045610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1fe81f cdw11:9f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:18.045629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.033 [2024-07-23 13:51:18.045699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.033 [2024-07-23 13:51:18.045718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:27.306 #32 NEW cov: 11737 ft: 13720 corp: 20/713b lim: 40 exec/s: 32 rss: 69Mb L: 36/40 MS: 1 EraseBytes- 00:09:27.306 [2024-07-23 13:51:18.105768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f301f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.105804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.105874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.105893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.105959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.105978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.106046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f0000 cdw11:1f001f28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.106064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.106129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.106147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:27.306 #33 NEW cov: 11737 ft: 13786 corp: 21/753b lim: 40 exec/s: 33 rss: 69Mb L: 40/40 MS: 1 ShuffleBytes- 00:09:27.306 [2024-07-23 13:51:18.155892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.155926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.156000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:1d1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.156020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.156088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000028 cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.156111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.156180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:1f1f1f1f cdw11:1f1f1f1f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.156198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:27.306 [2024-07-23 13:51:18.156272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:1f1f1f1f cdw11:1f1f0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:27.306 [2024-07-23 13:51:18.156291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:27.306 #34 NEW cov: 11737 ft: 13838 corp: 22/793b lim: 40 exec/s: 17 rss: 69Mb L: 40/40 MS: 1 CrossOver- 00:09:27.306 #34 DONE cov: 11737 ft: 13838 corp: 22/793b lim: 40 exec/s: 17 rss: 69Mb 00:09:27.306 ###### Recommended dictionary. ###### 00:09:27.306 "-\000\000\000" # Uses: 1 00:09:27.306 "\244\276\021\225\3646-\000" # Uses: 0 00:09:27.306 ###### End of recommended dictionary. ###### 00:09:27.306 Done 34 runs in 2 second(s) 00:09:27.579 13:51:18 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_13.conf 00:09:27.579 13:51:18 -- ../common.sh@72 -- # (( i++ )) 00:09:27.579 13:51:18 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:27.579 13:51:18 -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:09:27.579 13:51:18 -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:09:27.579 13:51:18 -- nvmf/run.sh@24 -- # local timen=1 00:09:27.579 13:51:18 -- nvmf/run.sh@25 -- # local core=0x1 00:09:27.579 13:51:18 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:27.579 13:51:18 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:09:27.579 13:51:18 -- nvmf/run.sh@29 -- # printf %02d 14 00:09:27.579 13:51:18 -- nvmf/run.sh@29 -- # port=4414 00:09:27.579 13:51:18 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:27.579 13:51:18 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:09:27.579 13:51:18 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:27.579 13:51:18 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 -r /var/tmp/spdk14.sock 00:09:27.579 [2024-07-23 13:51:18.382116] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
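For orientation: at this point run.sh has finished fuzzer 13, removed its per-run config (/tmp/fuzz_json_13.conf), and is preparing fuzzer 14. The TCP port is derived from the fuzzer id (44 followed by the zero-padded id, hence 4414 here), patched into the NVMe-oF listen address with sed, and llvm_nvme_fuzz is launched against the resulting config. A minimal sketch of that setup step, with placeholder file names standing in for the long workspace paths shown in the trace:

  # Sketch only, not part of the log: derive the per-run TCP port and
  # rewrite the trsvcid in the fuzzer JSON config, mirroring run.sh@29-33.
  fuzzer_type=14
  port="44$(printf '%02d' "$fuzzer_type")"                    # -> 4414
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      fuzz_json.conf > "/tmp/fuzz_json_${fuzzer_type}.conf"   # assumed output path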
00:09:27.579 [2024-07-23 13:51:18.382190] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914696 ] 00:09:27.579 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.837 [2024-07-23 13:51:18.752446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.096 [2024-07-23 13:51:18.860261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.096 [2024-07-23 13:51:18.860453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.096 [2024-07-23 13:51:18.923098] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.096 [2024-07-23 13:51:18.939319] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:09:28.096 INFO: Running with entropic power schedule (0xFF, 100). 00:09:28.096 INFO: Seed: 3297755812 00:09:28.096 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:28.096 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:28.096 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:09:28.096 INFO: A corpus is not provided, starting from an empty corpus 00:09:28.096 #2 INITED exec/s: 0 rss: 61Mb 00:09:28.096 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:28.096 This may also happen if the target rejected all inputs we tried so far 00:09:28.096 [2024-07-23 13:51:18.994828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.096 [2024-07-23 13:51:18.994877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.096 [2024-07-23 13:51:18.994929] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.096 [2024-07-23 13:51:18.994955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.096 [2024-07-23 13:51:18.995001] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.096 [2024-07-23 13:51:18.995026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:28.096 [2024-07-23 13:51:18.995072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.096 [2024-07-23 13:51:18.995096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:28.663 NEW_FUNC[1/671]: 0x494d50 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:09:28.663 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:28.663 #3 NEW cov: 11502 ft: 11504 corp: 2/32b lim: 35 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:09:28.663 [2024-07-23 13:51:19.506108] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.506174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.663 [2024-07-23 13:51:19.506236] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.506263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.663 [2024-07-23 13:51:19.506309] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.506334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:28.663 [2024-07-23 13:51:19.506379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.506404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:28.663 #4 NEW cov: 11616 ft: 11946 corp: 3/64b lim: 35 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 InsertByte- 00:09:28.663 [2024-07-23 13:51:19.606179] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.606237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.663 [2024-07-23 13:51:19.606290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.606316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.663 [2024-07-23 13:51:19.606362] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.606392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:28.663 [2024-07-23 13:51:19.606438] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.663 [2024-07-23 13:51:19.606464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:28.663 #10 NEW cov: 11622 ft: 12295 corp: 4/97b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 InsertByte- 00:09:28.922 [2024-07-23 13:51:19.696377] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES HOST MEM BUFFER cid:4 cdw10:8000000d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.696423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.696474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:28.922 [2024-07-23 13:51:19.696499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.696545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.696569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.696614] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.696638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:28.922 #11 NEW cov: 11707 ft: 12498 corp: 5/129b lim: 35 exec/s: 0 rss: 69Mb L: 32/33 MS: 1 InsertByte- 00:09:28.922 [2024-07-23 13:51:19.766630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.766673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.766724] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.766749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.766795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.766820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.766865] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.766889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:28.922 #12 NEW cov: 11707 ft: 12656 corp: 6/162b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 ChangeBinInt- 00:09:28.922 [2024-07-23 13:51:19.856616] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000088 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.856659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.856710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000088 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.856735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.922 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:28.922 #13 NEW cov: 11724 ft: 13065 corp: 7/178b lim: 35 exec/s: 0 rss: 69Mb L: 16/33 MS: 1 InsertRepeatedBytes- 00:09:28.922 [2024-07-23 13:51:19.937068] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.937111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.937163] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.937188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.937243] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000002d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.937267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:28.922 [2024-07-23 13:51:19.937313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:28.922 [2024-07-23 13:51:19.937337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.181 #14 NEW cov: 11731 ft: 13144 corp: 8/210b lim: 35 exec/s: 14 rss: 69Mb L: 32/33 MS: 1 CMP- DE: "\001-6\365\260\016p\270"- 00:09:29.181 [2024-07-23 13:51:20.007234] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.007279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.007330] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.007357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.007405] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.007430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.007476] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.007500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.181 #15 NEW cov: 11731 ft: 13157 corp: 9/241b lim: 35 exec/s: 15 rss: 69Mb L: 31/33 MS: 1 EraseBytes- 00:09:29.181 [2024-07-23 13:51:20.097687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.097736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.097788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:29.181 [2024-07-23 13:51:20.097814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.097860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.097885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.097936] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.097962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.098008] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.098032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:29.181 #16 NEW cov: 11731 ft: 13283 corp: 10/276b lim: 35 exec/s: 16 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:09:29.181 [2024-07-23 13:51:20.167687] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.167730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.167781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.167807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.167854] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.167878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.181 [2024-07-23 13:51:20.167923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.181 [2024-07-23 13:51:20.167948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.439 #17 NEW cov: 11731 ft: 13342 corp: 11/308b lim: 35 exec/s: 17 rss: 69Mb L: 32/35 MS: 1 InsertByte- 00:09:29.440 [2024-07-23 13:51:20.237795] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000088 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.237841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.237893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.237916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.237962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.237987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.440 #18 NEW cov: 11731 ft: 13548 corp: 12/333b lim: 35 exec/s: 18 rss: 69Mb L: 25/35 MS: 1 InsertRepeatedBytes- 00:09:29.440 [2024-07-23 13:51:20.328162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.328208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.328267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.328292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.328338] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.328368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.328414] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.328439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.440 #19 NEW cov: 11731 ft: 13557 corp: 13/364b lim: 35 exec/s: 19 rss: 69Mb L: 31/35 MS: 1 ChangeBit- 00:09:29.440 [2024-07-23 13:51:20.398472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.398516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.398567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.398591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.398637] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.398662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.398706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.398730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.440 [2024-07-23 13:51:20.398776] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.440 [2024-07-23 13:51:20.398800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:29.699 #20 NEW cov: 11731 ft: 13616 corp: 14/399b lim: 35 exec/s: 20 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:09:29.699 [2024-07-23 13:51:20.488566] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.488609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.488660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.488686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.488732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.488756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.488801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000fb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.488825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.699 #26 NEW cov: 11731 ft: 13636 corp: 15/430b lim: 35 exec/s: 26 rss: 69Mb L: 31/35 MS: 1 ChangeBit- 00:09:29.699 [2024-07-23 13:51:20.578804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.578847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.578905] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.578930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.578975] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.579000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.579045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000fb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.579070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.699 #27 NEW cov: 11731 ft: 13655 corp: 16/461b lim: 35 exec/s: 27 rss: 70Mb L: 31/35 MS: 1 CopyPart- 00:09:29.699 [2024-07-23 13:51:20.669073] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.669118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.669169] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.669194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.669249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.669274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.699 [2024-07-23 13:51:20.669320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.699 [2024-07-23 13:51:20.669345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.699 #28 NEW cov: 11731 ft: 13660 corp: 17/493b lim: 35 exec/s: 28 rss: 70Mb L: 32/35 MS: 1 ChangeByte- 00:09:29.957 [2024-07-23 13:51:20.739115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000088 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.957 [2024-07-23 13:51:20.739163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.957 [2024-07-23 13:51:20.739224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000088 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.957 [2024-07-23 13:51:20.739250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.957 #29 NEW cov: 11731 ft: 13662 corp: 18/509b lim: 35 exec/s: 29 rss: 70Mb L: 16/35 MS: 1 ChangeBit- 00:09:29.957 [2024-07-23 13:51:20.809485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.957 [2024-07-23 13:51:20.809530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.957 [2024-07-23 13:51:20.809581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.957 [2024-07-23 13:51:20.809607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.957 [2024-07-23 13:51:20.809652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.957 [2024-07-23 13:51:20.809687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.957 [2024-07-23 13:51:20.809733] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:29.957 [2024-07-23 13:51:20.809757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:29.957 #30 NEW cov: 11731 ft: 13679 corp: 19/542b lim: 35 exec/s: 30 rss: 70Mb L: 33/35 MS: 1 CopyPart- 00:09:29.958 [2024-07-23 13:51:20.879498] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000016 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.958 [2024-07-23 13:51:20.879543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.958 [2024-07-23 13:51:20.879594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.958 [2024-07-23 13:51:20.879619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.958 #31 NEW cov: 11738 ft: 13691 corp: 20/561b lim: 35 exec/s: 31 rss: 70Mb L: 19/35 MS: 1 EraseBytes- 00:09:29.958 [2024-07-23 13:51:20.949862] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.958 [2024-07-23 13:51:20.949906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:29.958 [2024-07-23 13:51:20.949958] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.958 [2024-07-23 13:51:20.949983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:29.958 [2024-07-23 13:51:20.950028] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000f3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.958 [2024-07-23 13:51:20.950053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:29.958 [2024-07-23 13:51:20.950098] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000fb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:29.958 [2024-07-23 13:51:20.950122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:30.216 #32 pulse cov: 11738 ft: 13759 corp: 20/561b lim: 35 exec/s: 16 rss: 70Mb 00:09:30.216 #32 NEW cov: 11738 ft: 13759 corp: 21/592b lim: 35 exec/s: 16 rss: 70Mb L: 31/35 MS: 1 ChangeByte- 00:09:30.216 #32 DONE cov: 11738 ft: 13759 corp: 21/592b lim: 35 exec/s: 16 rss: 70Mb 00:09:30.216 ###### Recommended dictionary. ###### 00:09:30.216 "\001-6\365\260\016p\270" # Uses: 0 00:09:30.216 ###### End of recommended dictionary. 
###### 00:09:30.216 Done 32 runs in 2 second(s) 00:09:30.216 13:51:21 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_14.conf 00:09:30.216 13:51:21 -- ../common.sh@72 -- # (( i++ )) 00:09:30.216 13:51:21 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:30.216 13:51:21 -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:09:30.216 13:51:21 -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:09:30.216 13:51:21 -- nvmf/run.sh@24 -- # local timen=1 00:09:30.216 13:51:21 -- nvmf/run.sh@25 -- # local core=0x1 00:09:30.216 13:51:21 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:30.216 13:51:21 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:09:30.216 13:51:21 -- nvmf/run.sh@29 -- # printf %02d 15 00:09:30.216 13:51:21 -- nvmf/run.sh@29 -- # port=4415 00:09:30.216 13:51:21 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:30.216 13:51:21 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:09:30.216 13:51:21 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:30.216 13:51:21 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 -r /var/tmp/spdk15.sock 00:09:30.216 [2024-07-23 13:51:21.217988] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:30.216 [2024-07-23 13:51:21.218061] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915058 ] 00:09:30.475 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.733 [2024-07-23 13:51:21.549228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.733 [2024-07-23 13:51:21.657104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:30.733 [2024-07-23 13:51:21.657307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.733 [2024-07-23 13:51:21.719814] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.733 [2024-07-23 13:51:21.736042] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:09:30.733 INFO: Running with entropic power schedule (0xFF, 100). 00:09:30.733 INFO: Seed: 1797792554 00:09:30.991 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:30.991 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:30.991 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:09:30.991 INFO: A corpus is not provided, starting from an empty corpus 00:09:30.991 #2 INITED exec/s: 0 rss: 61Mb 00:09:30.991 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:30.991 This may also happen if the target rejected all inputs we tried so far 00:09:30.991 [2024-07-23 13:51:21.813381] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:30.991 [2024-07-23 13:51:21.813442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.250 NEW_FUNC[1/669]: 0x496290 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:09:31.250 NEW_FUNC[2/669]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:31.250 #14 NEW cov: 11490 ft: 11489 corp: 2/9b lim: 35 exec/s: 0 rss: 68Mb L: 8/8 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:31.250 [2024-07-23 13:51:22.184226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.250 [2024-07-23 13:51:22.184275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.250 NEW_FUNC[1/1]: 0xebaa30 in spdk_process_is_primary /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:290 00:09:31.250 #19 NEW cov: 11604 ft: 11947 corp: 3/16b lim: 35 exec/s: 0 rss: 69Mb L: 7/8 MS: 5 EraseBytes-EraseBytes-InsertByte-CopyPart-CrossOver- 00:09:31.250 [2024-07-23 13:51:22.264442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.250 [2024-07-23 13:51:22.264483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.508 #20 NEW cov: 11610 ft: 12190 corp: 4/23b lim: 35 exec/s: 0 rss: 69Mb L: 7/8 MS: 1 EraseBytes- 00:09:31.508 [2024-07-23 13:51:22.324639] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.508 [2024-07-23 13:51:22.324679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.508 #21 NEW cov: 11695 ft: 12592 corp: 5/31b lim: 35 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 InsertByte- 00:09:31.508 [2024-07-23 13:51:22.395205] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.508 [2024-07-23 13:51:22.395248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.508 #22 NEW cov: 11695 ft: 12743 corp: 6/39b lim: 35 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 CrossOver- 00:09:31.508 [2024-07-23 13:51:22.465701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000032a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.508 [2024-07-23 13:51:22.465739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.508 #24 NEW cov: 11695 ft: 12791 corp: 7/47b lim: 35 exec/s: 0 rss: 69Mb L: 8/8 MS: 2 ChangeByte-CrossOver- 00:09:31.508 [2024-07-23 13:51:22.525812] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000032a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.508 
[2024-07-23 13:51:22.525850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.766 #25 NEW cov: 11695 ft: 12882 corp: 8/55b lim: 35 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 ChangeBit- 00:09:31.766 [2024-07-23 13:51:22.596245] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.766 [2024-07-23 13:51:22.596281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.766 #26 NEW cov: 11695 ft: 12908 corp: 9/63b lim: 35 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 ChangeBit- 00:09:31.766 [2024-07-23 13:51:22.656512] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000049c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.766 [2024-07-23 13:51:22.656549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.766 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:31.766 #27 NEW cov: 11718 ft: 12993 corp: 10/71b lim: 35 exec/s: 0 rss: 69Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:31.766 [2024-07-23 13:51:22.717421] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.766 [2024-07-23 13:51:22.717457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.766 [2024-07-23 13:51:22.717572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000018 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.766 [2024-07-23 13:51:22.717596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.766 [2024-07-23 13:51:22.717709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000018 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.766 [2024-07-23 13:51:22.717729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:31.766 #28 NEW cov: 11718 ft: 13382 corp: 11/95b lim: 35 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:09:31.766 [2024-07-23 13:51:22.787266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.766 [2024-07-23 13:51:22.787302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.024 #29 NEW cov: 11718 ft: 13448 corp: 12/103b lim: 35 exec/s: 29 rss: 69Mb L: 8/24 MS: 1 ChangeByte- 00:09:32.025 [2024-07-23 13:51:22.847598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000032a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.025 [2024-07-23 13:51:22.847634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.025 #30 NEW cov: 11718 ft: 13454 corp: 13/112b lim: 35 exec/s: 30 rss: 69Mb L: 9/24 MS: 1 InsertByte- 00:09:32.025 [2024-07-23 13:51:22.908755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
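A note on the two status pairs that dominate the completions above: "(00/02)" is status code type 0 (generic) with status code 0x02, Invalid Field in Command, and "(01/0d)" is type 1 (command specific) with code 0x0d, Feature Identifier Not Saveable; in other words the target is cleanly rejecting the fuzzed GET/SET FEATURES commands rather than crashing. A sketch of how that pair is decoded from completion-queue-entry dword 3 (the dw3 value below is hypothetical, chosen to yield 01/0d):

  # Sketch only: decode the (SCT/SC) pair printed by
  # spdk_nvme_print_completion from CQE dword 3.
  dw3=0x021A0000                   # hypothetical CQE DW3
  sct=$(( (dw3 >> 25) & 0x7 ))     # status code type: 0=generic, 1=cmd-specific
  sc=$((  (dw3 >> 17) & 0xff ))    # status code, e.g. 0x02 or 0x0d
  printf '(%02x/%02x)\n' "$sct" "$sc"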
00:09:32.025 [2024-07-23 13:51:22.908791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.025 [2024-07-23 13:51:22.908908] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000018 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.025 [2024-07-23 13:51:22.908929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.025 [2024-07-23 13:51:22.909038] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000018 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.025 [2024-07-23 13:51:22.909059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:32.025 #31 NEW cov: 11718 ft: 13538 corp: 14/137b lim: 35 exec/s: 31 rss: 69Mb L: 25/25 MS: 1 CrossOver- 00:09:32.025 [2024-07-23 13:51:22.978899] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.025 [2024-07-23 13:51:22.978934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.025 [2024-07-23 13:51:22.979045] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.025 [2024-07-23 13:51:22.979067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.025 #35 NEW cov: 11718 ft: 13738 corp: 15/154b lim: 35 exec/s: 35 rss: 70Mb L: 17/25 MS: 4 CrossOver-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:09:32.283 [2024-07-23 13:51:23.048846] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000003ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.283 [2024-07-23 13:51:23.048883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.283 #40 NEW cov: 11718 ft: 13795 corp: 16/166b lim: 35 exec/s: 40 rss: 70Mb L: 12/25 MS: 5 EraseBytes-ShuffleBytes-ChangeBit-ShuffleBytes-CrossOver- 00:09:32.283 [2024-07-23 13:51:23.109173] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000319 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.283 [2024-07-23 13:51:23.109208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.283 #41 NEW cov: 11718 ft: 13810 corp: 17/175b lim: 35 exec/s: 41 rss: 70Mb L: 9/25 MS: 1 InsertByte- 00:09:32.283 [2024-07-23 13:51:23.179703] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.283 [2024-07-23 13:51:23.179739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.283 #42 NEW cov: 11718 ft: 13840 corp: 18/183b lim: 35 exec/s: 42 rss: 70Mb L: 8/25 MS: 1 InsertByte- 00:09:32.283 [2024-07-23 13:51:23.240036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000032a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.283 [2024-07-23 13:51:23.240072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.283 #43 NEW cov: 11718 ft: 13872 corp: 19/193b lim: 35 exec/s: 43 rss: 70Mb L: 10/25 MS: 1 CrossOver- 00:09:32.549 [2024-07-23 13:51:23.311202] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.311244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.549 [2024-07-23 13:51:23.311355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.311385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.549 [2024-07-23 13:51:23.311491] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000596 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.311512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:32.549 #44 NEW cov: 11718 ft: 13886 corp: 20/214b lim: 35 exec/s: 44 rss: 70Mb L: 21/25 MS: 1 CrossOver- 00:09:32.549 [2024-07-23 13:51:23.370880] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000323 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.370915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.549 #45 NEW cov: 11718 ft: 13954 corp: 21/221b lim: 35 exec/s: 45 rss: 70Mb L: 7/25 MS: 1 ChangeByte- 00:09:32.549 [2024-07-23 13:51:23.431057] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.431094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.549 #46 NEW cov: 11718 ft: 13985 corp: 22/230b lim: 35 exec/s: 46 rss: 70Mb L: 9/25 MS: 1 CopyPart- 00:09:32.549 [2024-07-23 13:51:23.492105] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000049c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.492141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.549 [2024-07-23 13:51:23.492252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.492274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.549 [2024-07-23 13:51:23.492383] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000095 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.549 [2024-07-23 13:51:23.492403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:32.549 #47 NEW cov: 11718 ft: 14018 corp: 23/251b lim: 35 exec/s: 47 rss: 70Mb L: 21/25 MS: 1 CrossOver- 00:09:32.549 [2024-07-23 13:51:23.561942] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:32.549 [2024-07-23 13:51:23.561979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.810 #48 NEW cov: 11718 ft: 14035 corp: 24/258b lim: 35 exec/s: 48 rss: 70Mb L: 7/25 MS: 1 ShuffleBytes- 00:09:32.810 [2024-07-23 13:51:23.622528] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000032a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.810 [2024-07-23 13:51:23.622565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.810 #49 NEW cov: 11718 ft: 14039 corp: 25/266b lim: 35 exec/s: 49 rss: 70Mb L: 8/25 MS: 1 ChangeByte- 00:09:32.810 [2024-07-23 13:51:23.683252] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:0000036a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.810 [2024-07-23 13:51:23.683290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.810 [2024-07-23 13:51:23.683394] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:0000013d SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.810 [2024-07-23 13:51:23.683417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.810 #50 NEW cov: 11718 ft: 14045 corp: 26/283b lim: 35 exec/s: 50 rss: 70Mb L: 17/25 MS: 1 CrossOver- 00:09:32.810 [2024-07-23 13:51:23.743147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000323 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.811 [2024-07-23 13:51:23.743184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.811 #51 NEW cov: 11718 ft: 14061 corp: 27/290b lim: 35 exec/s: 25 rss: 70Mb L: 7/25 MS: 1 CrossOver- 00:09:32.811 #51 DONE cov: 11718 ft: 14061 corp: 27/290b lim: 35 exec/s: 25 rss: 70Mb 00:09:32.811 Done 51 runs in 2 second(s) 00:09:33.069 13:51:23 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_15.conf 00:09:33.069 13:51:23 -- ../common.sh@72 -- # (( i++ )) 00:09:33.069 13:51:23 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:33.069 13:51:23 -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:09:33.069 13:51:23 -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:09:33.069 13:51:23 -- nvmf/run.sh@24 -- # local timen=1 00:09:33.069 13:51:23 -- nvmf/run.sh@25 -- # local core=0x1 00:09:33.069 13:51:23 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:33.069 13:51:23 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:09:33.069 13:51:23 -- nvmf/run.sh@29 -- # printf %02d 16 00:09:33.069 13:51:23 -- nvmf/run.sh@29 -- # port=4416 00:09:33.069 13:51:23 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:33.069 13:51:23 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:09:33.069 13:51:23 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:33.069 13:51:23 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 -r /var/tmp/spdk16.sock 00:09:33.069 [2024-07-23 13:51:23.971332] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:33.069 [2024-07-23 13:51:23.971402] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915428 ] 00:09:33.069 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.328 [2024-07-23 13:51:24.207403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.328 [2024-07-23 13:51:24.292029] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.328 [2024-07-23 13:51:24.292224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.603 [2024-07-23 13:51:24.355083] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.603 [2024-07-23 13:51:24.371314] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:09:33.603 INFO: Running with entropic power schedule (0xFF, 100). 00:09:33.603 INFO: Seed: 139825008 00:09:33.603 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:33.603 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:33.603 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:09:33.603 INFO: A corpus is not provided, starting from an empty corpus 00:09:33.603 #2 INITED exec/s: 0 rss: 61Mb 00:09:33.603 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:33.603 This may also happen if the target rejected all inputs we tried so far 00:09:33.603 [2024-07-23 13:51:24.426353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.603 [2024-07-23 13:51:24.426402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.169 NEW_FUNC[1/671]: 0x497740 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:09:34.169 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:34.169 #14 NEW cov: 11594 ft: 11595 corp: 2/22b lim: 105 exec/s: 0 rss: 68Mb L: 21/21 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:34.169 [2024-07-23 13:51:24.927820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:24.927881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.169 [2024-07-23 13:51:24.927935] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:24.927962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.169 [2024-07-23 13:51:24.928006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:24.928030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.169 [2024-07-23 13:51:24.928074] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:24.928098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.169 #15 NEW cov: 11707 ft: 12735 corp: 3/111b lim: 105 exec/s: 0 rss: 68Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:09:34.169 [2024-07-23 13:51:25.017662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1 len:257 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:25.017707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.169 #16 NEW cov: 11713 ft: 12997 corp: 4/132b lim: 105 exec/s: 0 rss: 69Mb L: 21/89 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\001"- 00:09:34.169 [2024-07-23 13:51:25.107853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:137438953472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:25.107896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.169 #17 NEW cov: 11798 ft: 13252 corp: 5/153b lim: 105 exec/s: 0 rss: 69Mb L: 21/89 MS: 1 ChangeBit- 00:09:34.169 [2024-07-23 13:51:25.178321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:25.178363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.169 [2024-07-23 13:51:25.178412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:25.178439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.169 [2024-07-23 13:51:25.178486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:25.178512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.169 [2024-07-23 13:51:25.178556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.169 [2024-07-23 13:51:25.178580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:34.428 #18 NEW cov: 11798 ft: 13313 corp: 6/243b lim: 105 exec/s: 0 rss: 69Mb L: 90/90 MS: 1 InsertByte- 00:09:34.428 [2024-07-23 13:51:25.278325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1 len:257 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.428 [2024-07-23 13:51:25.278373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.428 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:34.428 #19 NEW cov: 11815 ft: 13419 corp: 7/264b lim: 105 exec/s: 0 rss: 69Mb L: 21/90 MS: 1 CrossOver- 00:09:34.428 [2024-07-23 13:51:25.368636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4294967297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.428 [2024-07-23 13:51:25.368678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.428 #20 NEW cov: 11815 ft: 13520 corp: 8/293b lim: 105 exec/s: 20 rss: 69Mb L: 29/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\001"- 00:09:34.428 [2024-07-23 13:51:25.438800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:137438953472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.428 [2024-07-23 13:51:25.438843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.686 #21 NEW cov: 11815 ft: 13572 corp: 9/314b lim: 105 exec/s: 21 rss: 69Mb L: 21/90 MS: 1 ChangeBit- 00:09:34.686 [2024-07-23 13:51:25.529198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.529248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.686 [2024-07-23 13:51:25.529297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.529324] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.686 [2024-07-23 13:51:25.529371] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.529395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.686 #22 NEW cov: 11815 ft: 13905 corp: 10/387b lim: 105 exec/s: 22 rss: 69Mb L: 73/90 MS: 1 InsertRepeatedBytes- 00:09:34.686 [2024-07-23 13:51:25.609349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.609391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.686 [2024-07-23 13:51:25.609442] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.609469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.686 #23 NEW cov: 11815 ft: 14278 corp: 11/432b lim: 105 exec/s: 23 rss: 69Mb L: 45/90 MS: 1 EraseBytes- 00:09:34.686 [2024-07-23 13:51:25.689665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.689708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.686 [2024-07-23 13:51:25.689758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.689784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.686 [2024-07-23 13:51:25.689831] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.686 [2024-07-23 13:51:25.689861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:34.945 #24 NEW cov: 11815 ft: 14294 corp: 12/513b lim: 105 exec/s: 24 rss: 69Mb L: 81/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\001"- 00:09:34.945 [2024-07-23 13:51:25.789773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:1 len:257 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.945 [2024-07-23 13:51:25.789815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.945 #25 NEW cov: 11815 ft: 14330 corp: 13/534b lim: 105 exec/s: 25 rss: 70Mb L: 21/90 MS: 1 ChangeByte- 00:09:34.945 [2024-07-23 13:51:25.880032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:256 len:2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.945 [2024-07-23 13:51:25.880074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.945 #31 
NEW cov: 11815 ft: 14345 corp: 14/563b lim: 105 exec/s: 31 rss: 70Mb L: 29/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\001"- 00:09:34.945 [2024-07-23 13:51:25.951029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.945 [2024-07-23 13:51:25.951074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:34.945 [2024-07-23 13:51:25.951148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.945 [2024-07-23 13:51:25.951173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:34.945 [2024-07-23 13:51:25.951253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.945 [2024-07-23 13:51:25.951279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.203 #32 NEW cov: 11815 ft: 14515 corp: 15/636b lim: 105 exec/s: 32 rss: 70Mb L: 73/90 MS: 1 InsertRepeatedBytes- 00:09:35.203 [2024-07-23 13:51:26.001219] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.001257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.203 [2024-07-23 13:51:26.001318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.001340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.203 [2024-07-23 13:51:26.001405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.001427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.203 [2024-07-23 13:51:26.001493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.001515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.203 #33 NEW cov: 11815 ft: 14553 corp: 16/723b lim: 105 exec/s: 33 rss: 70Mb L: 87/90 MS: 1 CrossOver- 00:09:35.203 [2024-07-23 13:51:26.071011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:4294967297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.071052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.203 #34 NEW cov: 11815 ft: 14610 corp: 17/752b lim: 105 exec/s: 34 rss: 70Mb L: 29/90 MS: 1 CrossOver- 00:09:35.203 [2024-07-23 13:51:26.131509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.131547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.203 [2024-07-23 13:51:26.131595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.131618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.203 [2024-07-23 13:51:26.131688] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.131709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.203 #40 NEW cov: 11815 ft: 14651 corp: 18/833b lim: 105 exec/s: 40 rss: 70Mb L: 81/90 MS: 1 ChangeBinInt- 00:09:35.203 [2024-07-23 13:51:26.191358] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.203 [2024-07-23 13:51:26.191395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.203 #41 NEW cov: 11815 ft: 14672 corp: 19/865b lim: 105 exec/s: 41 rss: 70Mb L: 32/90 MS: 1 EraseBytes- 00:09:35.462 [2024-07-23 13:51:26.241639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.462 [2024-07-23 13:51:26.241676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.462 [2024-07-23 13:51:26.241734] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.241757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.463 #42 NEW cov: 11815 ft: 14681 corp: 20/924b lim: 105 exec/s: 42 rss: 70Mb L: 59/90 MS: 1 EraseBytes- 00:09:35.463 [2024-07-23 13:51:26.291683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167774208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.291720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.463 #43 NEW cov: 11822 ft: 14718 corp: 21/956b lim: 105 exec/s: 43 rss: 70Mb L: 32/90 MS: 1 ChangeBit- 00:09:35.463 [2024-07-23 13:51:26.352291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.352327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.463 [2024-07-23 13:51:26.352391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:12804210589370003889 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.352414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 
dnr:1 00:09:35.463 [2024-07-23 13:51:26.352486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:12804210592339571121 len:45490 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.352508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.463 [2024-07-23 13:51:26.352580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:2981167104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.352604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:35.463 #44 NEW cov: 11822 ft: 14730 corp: 22/1060b lim: 105 exec/s: 44 rss: 70Mb L: 104/104 MS: 1 CrossOver- 00:09:35.463 [2024-07-23 13:51:26.412296] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.412334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:35.463 [2024-07-23 13:51:26.412388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.412407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:35.463 [2024-07-23 13:51:26.412480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.463 [2024-07-23 13:51:26.412501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:35.463 #45 NEW cov: 11822 ft: 14786 corp: 23/1124b lim: 105 exec/s: 22 rss: 70Mb L: 64/104 MS: 1 CrossOver- 00:09:35.463 #45 DONE cov: 11822 ft: 14786 corp: 23/1124b lim: 105 exec/s: 22 rss: 70Mb 00:09:35.463 ###### Recommended dictionary. ###### 00:09:35.463 "\001\000\000\000\000\000\000\001" # Uses: 3 00:09:35.463 ###### End of recommended dictionary. 
###### 00:09:35.463 Done 45 runs in 2 second(s) 00:09:35.722 13:51:26 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_16.conf 00:09:35.722 13:51:26 -- ../common.sh@72 -- # (( i++ )) 00:09:35.722 13:51:26 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:35.722 13:51:26 -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:09:35.722 13:51:26 -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:09:35.722 13:51:26 -- nvmf/run.sh@24 -- # local timen=1 00:09:35.722 13:51:26 -- nvmf/run.sh@25 -- # local core=0x1 00:09:35.722 13:51:26 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:35.722 13:51:26 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:09:35.722 13:51:26 -- nvmf/run.sh@29 -- # printf %02d 17 00:09:35.722 13:51:26 -- nvmf/run.sh@29 -- # port=4417 00:09:35.722 13:51:26 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:35.722 13:51:26 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:09:35.722 13:51:26 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:35.722 13:51:26 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 -r /var/tmp/spdk17.sock 00:09:35.722 [2024-07-23 13:51:26.654979] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:35.722 [2024-07-23 13:51:26.655060] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915795 ] 00:09:35.722 EAL: No free 2048 kB hugepages reported on node 1 00:09:35.980 [2024-07-23 13:51:26.896558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.980 [2024-07-23 13:51:26.981441] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.980 [2024-07-23 13:51:26.981626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.239 [2024-07-23 13:51:27.044250] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.239 [2024-07-23 13:51:27.060476] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:09:36.239 INFO: Running with entropic power schedule (0xFF, 100). 00:09:36.239 INFO: Seed: 2828822899 00:09:36.239 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:36.239 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:36.239 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:09:36.239 INFO: A corpus is not provided, starting from an empty corpus 00:09:36.239 #2 INITED exec/s: 0 rss: 61Mb 00:09:36.239 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:36.239 This may also happen if the target rejected all inputs we tried so far 00:09:36.239 [2024-07-23 13:51:27.116298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.239 [2024-07-23 13:51:27.116340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.239 [2024-07-23 13:51:27.116401] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.239 [2024-07-23 13:51:27.116423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.239 [2024-07-23 13:51:27.116486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.239 [2024-07-23 13:51:27.116508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.498 NEW_FUNC[1/672]: 0x49aa30 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:09:36.498 NEW_FUNC[2/672]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:36.498 #16 NEW cov: 11615 ft: 11616 corp: 2/94b lim: 120 exec/s: 0 rss: 68Mb L: 93/93 MS: 4 ChangeBit-ChangeByte-ChangeByte-InsertRepeatedBytes- 00:09:36.498 [2024-07-23 13:51:27.447062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.498 [2024-07-23 13:51:27.447114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.498 [2024-07-23 13:51:27.447181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.498 [2024-07-23 13:51:27.447203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.498 [2024-07-23 13:51:27.447277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.498 [2024-07-23 13:51:27.447303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.498 #22 NEW cov: 11728 ft: 12136 corp: 3/187b lim: 120 exec/s: 0 rss: 68Mb L: 93/93 MS: 1 ChangeByte- 00:09:36.498 [2024-07-23 13:51:27.507182] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.498 [2024-07-23 13:51:27.507229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.498 [2024-07-23 13:51:27.507276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.498 [2024-07-23 13:51:27.507298] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.498 [2024-07-23 13:51:27.507362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.498 [2024-07-23 13:51:27.507387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.757 #23 NEW cov: 11734 ft: 12418 corp: 4/277b lim: 120 exec/s: 0 rss: 69Mb L: 90/93 MS: 1 EraseBytes- 00:09:36.757 [2024-07-23 13:51:27.567538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.567577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.567630] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.567652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.567714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.567733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.567795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.567816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.757 #29 NEW cov: 11819 ft: 13030 corp: 5/375b lim: 120 exec/s: 0 rss: 69Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:09:36.757 [2024-07-23 13:51:27.617500] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.617538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.617580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.617602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.617663] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.617685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.757 #30 NEW cov: 11819 ft: 13168 corp: 6/465b lim: 120 exec/s: 0 rss: 69Mb L: 90/98 MS: 1 ChangeBinInt- 00:09:36.757 [2024-07-23 13:51:27.677825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.677863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.677920] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.677942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.678003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955726732286811 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.678024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.678086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.678109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.757 #31 NEW cov: 11819 ft: 13287 corp: 7/564b lim: 120 exec/s: 0 rss: 69Mb L: 99/99 MS: 1 InsertRepeatedBytes- 00:09:36.757 [2024-07-23 13:51:27.727955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.727992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.728037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.728060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.728125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955726732286811 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.728147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:36.757 [2024-07-23 13:51:27.728220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:36.757 [2024-07-23 13:51:27.728242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:36.757 #32 NEW cov: 11819 ft: 13353 corp: 8/663b lim: 120 exec/s: 0 rss: 69Mb L: 99/99 MS: 1 ShuffleBytes- 00:09:37.016 [2024-07-23 13:51:27.788102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.788140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.016 [2024-07-23 13:51:27.788202] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.788229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.016 [2024-07-23 13:51:27.788293] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.788313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.016 [2024-07-23 13:51:27.788376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.788397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.016 #33 NEW cov: 11819 ft: 13393 corp: 9/761b lim: 120 exec/s: 0 rss: 69Mb L: 98/99 MS: 1 ChangeBinInt- 00:09:37.016 [2024-07-23 13:51:27.848257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.848295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.016 [2024-07-23 13:51:27.848353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.848375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.016 [2024-07-23 13:51:27.848440] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.016 [2024-07-23 13:51:27.848464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.016 [2024-07-23 13:51:27.848526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.848548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.017 #34 NEW cov: 11819 ft: 13518 corp: 10/859b lim: 120 exec/s: 0 rss: 69Mb L: 98/99 MS: 1 CrossOver- 00:09:37.017 [2024-07-23 13:51:27.908116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.908153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.017 [2024-07-23 13:51:27.908208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.908234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.017 #35 NEW cov: 
11819 ft: 13909 corp: 11/924b lim: 120 exec/s: 0 rss: 69Mb L: 65/99 MS: 1 EraseBytes- 00:09:37.017 [2024-07-23 13:51:27.968668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.968705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.017 [2024-07-23 13:51:27.968763] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.968785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.017 [2024-07-23 13:51:27.968849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.968869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.017 [2024-07-23 13:51:27.968934] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:27.968957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.017 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:37.017 #36 NEW cov: 11842 ft: 13949 corp: 12/1023b lim: 120 exec/s: 0 rss: 69Mb L: 99/99 MS: 1 InsertByte- 00:09:37.017 [2024-07-23 13:51:28.018670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:28.018708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.017 [2024-07-23 13:51:28.018757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:28.018778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.017 [2024-07-23 13:51:28.018842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.017 [2024-07-23 13:51:28.018863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.276 #37 NEW cov: 11842 ft: 13969 corp: 13/1113b lim: 120 exec/s: 0 rss: 69Mb L: 90/99 MS: 1 ChangeBit- 00:09:37.276 [2024-07-23 13:51:28.068629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.068666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.068712] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 
lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.068733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.276 #38 NEW cov: 11842 ft: 13992 corp: 14/1179b lim: 120 exec/s: 38 rss: 69Mb L: 66/99 MS: 1 InsertByte- 00:09:37.276 [2024-07-23 13:51:28.129156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.129193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.129248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.129272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.129334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.129355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.129419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.129440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.276 #39 NEW cov: 11842 ft: 14009 corp: 15/1277b lim: 120 exec/s: 39 rss: 69Mb L: 98/99 MS: 1 ChangeBit- 00:09:37.276 [2024-07-23 13:51:28.179118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.179155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.179197] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.179225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.179287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.179307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.276 #40 NEW cov: 11842 ft: 14024 corp: 16/1370b lim: 120 exec/s: 40 rss: 69Mb L: 93/99 MS: 1 ChangeByte- 00:09:37.276 [2024-07-23 13:51:28.229437] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.229474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.229534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.229561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.229623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955726732286811 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.229643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.229704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264976731 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.229725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.276 #41 NEW cov: 11842 ft: 14044 corp: 17/1469b lim: 120 exec/s: 41 rss: 70Mb L: 99/99 MS: 1 ChangeBit- 00:09:37.276 [2024-07-23 13:51:28.289456] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.289493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.289543] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.289564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.276 [2024-07-23 13:51:28.289626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:25180 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.276 [2024-07-23 13:51:28.289648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.535 #42 NEW cov: 11842 ft: 14055 corp: 18/1562b lim: 120 exec/s: 42 rss: 70Mb L: 93/99 MS: 1 ChangeBinInt- 00:09:37.535 [2024-07-23 13:51:28.349499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.349536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.349580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.349600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.349661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.349682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.535 #43 NEW cov: 11842 ft: 14082 corp: 19/1655b lim: 120 exec/s: 43 rss: 70Mb L: 93/99 MS: 1 CMP- DE: "\010\000\000\000"- 00:09:37.535 [2024-07-23 13:51:28.389863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:100447932844032 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.389901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.389950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.389973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.390035] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955726732286811 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.390060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.390123] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.390147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.535 #44 NEW cov: 11842 ft: 14083 corp: 20/1754b lim: 120 exec/s: 44 rss: 70Mb L: 99/99 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:37.535 [2024-07-23 13:51:28.439970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.440006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.440052] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1808504322059213081 len:6426 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.440074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.440136] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.440157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.440224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.440246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.535 #45 NEW cov: 11842 ft: 14094 corp: 21/1872b lim: 120 exec/s: 45 rss: 70Mb L: 118/118 MS: 1 InsertRepeatedBytes- 00:09:37.535 [2024-07-23 13:51:28.490164] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.490202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.535 [2024-07-23 13:51:28.490257] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.535 [2024-07-23 13:51:28.490282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.536 [2024-07-23 13:51:28.490345] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.536 [2024-07-23 13:51:28.490366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.536 [2024-07-23 13:51:28.490430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.536 [2024-07-23 13:51:28.490453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.536 #46 NEW cov: 11842 ft: 14120 corp: 22/1970b lim: 120 exec/s: 46 rss: 70Mb L: 98/118 MS: 1 ShuffleBytes- 00:09:37.536 [2024-07-23 13:51:28.540337] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.536 [2024-07-23 13:51:28.540374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.536 [2024-07-23 13:51:28.540424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:8816741143440743259 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.536 [2024-07-23 13:51:28.540446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.536 [2024-07-23 13:51:28.540507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.536 [2024-07-23 13:51:28.540529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.536 [2024-07-23 13:51:28.540571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.536 [2024-07-23 13:51:28.540592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.794 #47 NEW cov: 11842 ft: 14196 corp: 23/2087b lim: 120 exec/s: 47 rss: 70Mb L: 117/118 MS: 1 CrossOver- 00:09:37.794 [2024-07-23 13:51:28.600496] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.600532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.600595] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.600617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.600678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.600699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.600761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.600782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.794 #48 NEW cov: 11842 ft: 14216 corp: 24/2185b lim: 120 exec/s: 48 rss: 70Mb L: 98/118 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:37.794 [2024-07-23 13:51:28.660480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.660515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.660560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.660582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.660645] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.660666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:37.794 #49 NEW cov: 11842 ft: 14224 corp: 25/2278b lim: 120 exec/s: 49 rss: 70Mb L: 93/118 MS: 1 ChangeBit- 00:09:37.794 [2024-07-23 13:51:28.720810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.720846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.720921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.720943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.794 [2024-07-23 13:51:28.721002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582864077957847899 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.721023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:09:37.794 [2024-07-23 13:51:28.721084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.794 [2024-07-23 13:51:28.721105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:37.795 #50 NEW cov: 11842 ft: 14236 corp: 26/2381b lim: 120 exec/s: 50 rss: 70Mb L: 103/118 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:37.795 [2024-07-23 13:51:28.780846] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.795 [2024-07-23 13:51:28.780884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:37.795 [2024-07-23 13:51:28.780931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.795 [2024-07-23 13:51:28.780952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:37.795 [2024-07-23 13:51:28.781017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:37.795 [2024-07-23 13:51:28.781038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.053 #51 NEW cov: 11842 ft: 14265 corp: 27/2475b lim: 120 exec/s: 51 rss: 70Mb L: 94/118 MS: 1 InsertByte- 00:09:38.053 [2024-07-23 13:51:28.841161] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.841200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:28.841268] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.841289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:28.841351] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.841372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:28.841436] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.841457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.053 #52 NEW cov: 11842 ft: 14278 corp: 28/2593b lim: 120 exec/s: 52 rss: 70Mb L: 118/118 MS: 1 CopyPart- 00:09:38.053 [2024-07-23 13:51:28.890984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:38.053 [2024-07-23 13:51:28.891022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:28.891077] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.891100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.053 #53 NEW cov: 11842 ft: 14284 corp: 29/2648b lim: 120 exec/s: 53 rss: 70Mb L: 55/118 MS: 1 EraseBytes- 00:09:38.053 [2024-07-23 13:51:28.941308] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:2049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.941344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:28.941391] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.941413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:28.941477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:28.941497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.053 #54 NEW cov: 11842 ft: 14342 corp: 30/2742b lim: 120 exec/s: 54 rss: 70Mb L: 94/118 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:38.053 [2024-07-23 13:51:29.001315] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:29.001352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:29.001395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:29.001416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.053 #55 NEW cov: 11842 ft: 14403 corp: 31/2802b lim: 120 exec/s: 55 rss: 71Mb L: 60/118 MS: 1 EraseBytes- 00:09:38.053 [2024-07-23 13:51:29.051791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:29.051829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:29.051887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:29.051909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.053 
[2024-07-23 13:51:29.051965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:29.051985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.053 [2024-07-23 13:51:29.052049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:6582955726738250587 len:23420 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.053 [2024-07-23 13:51:29.052068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:38.312 #56 NEW cov: 11842 ft: 14437 corp: 32/2900b lim: 120 exec/s: 56 rss: 71Mb L: 98/118 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:09:38.312 [2024-07-23 13:51:29.111792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.312 [2024-07-23 13:51:29.111829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:38.312 [2024-07-23 13:51:29.111877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39516 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.312 [2024-07-23 13:51:29.111898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:38.312 [2024-07-23 13:51:29.111960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:6582955728264977243 len:23388 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:38.312 [2024-07-23 13:51:29.111982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:38.312 #57 NEW cov: 11842 ft: 14450 corp: 33/2976b lim: 120 exec/s: 28 rss: 71Mb L: 76/118 MS: 1 InsertRepeatedBytes- 00:09:38.312 #57 DONE cov: 11842 ft: 14450 corp: 33/2976b lim: 120 exec/s: 28 rss: 71Mb 00:09:38.312 ###### Recommended dictionary. ###### 00:09:38.312 "\010\000\000\000" # Uses: 5 00:09:38.312 ###### End of recommended dictionary. 
###### 00:09:38.312 Done 57 runs in 2 second(s) 00:09:38.312 13:51:29 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_17.conf 00:09:38.312 13:51:29 -- ../common.sh@72 -- # (( i++ )) 00:09:38.312 13:51:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:38.312 13:51:29 -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:09:38.312 13:51:29 -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:09:38.312 13:51:29 -- nvmf/run.sh@24 -- # local timen=1 00:09:38.312 13:51:29 -- nvmf/run.sh@25 -- # local core=0x1 00:09:38.312 13:51:29 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:38.312 13:51:29 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:09:38.312 13:51:29 -- nvmf/run.sh@29 -- # printf %02d 18 00:09:38.312 13:51:29 -- nvmf/run.sh@29 -- # port=4418 00:09:38.312 13:51:29 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:38.312 13:51:29 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:09:38.312 13:51:29 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:38.312 13:51:29 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 -r /var/tmp/spdk18.sock 00:09:38.570 [2024-07-23 13:51:29.340082] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:38.570 [2024-07-23 13:51:29.340157] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916155 ] 00:09:38.570 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.828 [2024-07-23 13:51:29.595079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.828 [2024-07-23 13:51:29.681999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.828 [2024-07-23 13:51:29.682181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.828 [2024-07-23 13:51:29.744645] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.828 [2024-07-23 13:51:29.760859] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:09:38.828 INFO: Running with entropic power schedule (0xFF, 100). 00:09:38.828 INFO: Seed: 1233853708 00:09:38.828 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:38.828 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:38.828 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:09:38.828 INFO: A corpus is not provided, starting from an empty corpus 00:09:38.828 #2 INITED exec/s: 0 rss: 61Mb 00:09:38.828 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:38.828 This may also happen if the target rejected all inputs we tried so far 00:09:38.828 [2024-07-23 13:51:29.826263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:38.828 [2024-07-23 13:51:29.826302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.343 NEW_FUNC[1/670]: 0x49e290 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:09:39.343 NEW_FUNC[2/670]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:39.343 #5 NEW cov: 11559 ft: 11560 corp: 2/26b lim: 100 exec/s: 0 rss: 68Mb L: 25/25 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:09:39.343 [2024-07-23 13:51:30.307527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.343 [2024-07-23 13:51:30.307576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.343 #11 NEW cov: 11672 ft: 12124 corp: 3/50b lim: 100 exec/s: 0 rss: 68Mb L: 24/25 MS: 1 CrossOver- 00:09:39.343 [2024-07-23 13:51:30.357558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.343 [2024-07-23 13:51:30.357595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.600 #12 NEW cov: 11678 ft: 12245 corp: 4/75b lim: 100 exec/s: 0 rss: 68Mb L: 25/25 MS: 1 ChangeBit- 00:09:39.601 [2024-07-23 13:51:30.417714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.601 [2024-07-23 13:51:30.417749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.601 #13 NEW cov: 11763 ft: 12593 corp: 5/100b lim: 100 exec/s: 0 rss: 68Mb L: 25/25 MS: 1 ShuffleBytes- 00:09:39.601 [2024-07-23 13:51:30.468160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.601 [2024-07-23 13:51:30.468195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.601 [2024-07-23 13:51:30.468243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:39.601 [2024-07-23 13:51:30.468262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.601 [2024-07-23 13:51:30.468325] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:39.601 [2024-07-23 13:51:30.468344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.601 #14 NEW cov: 11763 ft: 13095 corp: 6/173b lim: 100 exec/s: 0 rss: 69Mb L: 73/73 MS: 1 InsertRepeatedBytes- 00:09:39.601 [2024-07-23 13:51:30.528262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.601 [2024-07-23 13:51:30.528296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.601 [2024-07-23 
13:51:30.528355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:39.601 [2024-07-23 13:51:30.528376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.601 [2024-07-23 13:51:30.528437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:39.601 [2024-07-23 13:51:30.528454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.601 #15 NEW cov: 11763 ft: 13168 corp: 7/251b lim: 100 exec/s: 0 rss: 69Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:09:39.601 [2024-07-23 13:51:30.588157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.601 [2024-07-23 13:51:30.588191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.601 #16 NEW cov: 11763 ft: 13244 corp: 8/280b lim: 100 exec/s: 0 rss: 69Mb L: 29/78 MS: 1 CMP- DE: "\021\000\000\000"- 00:09:39.859 [2024-07-23 13:51:30.638716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.859 [2024-07-23 13:51:30.638751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.638809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:39.859 [2024-07-23 13:51:30.638829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.638891] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:39.859 [2024-07-23 13:51:30.638909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.638971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:39.859 [2024-07-23 13:51:30.638990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:39.859 #17 NEW cov: 11763 ft: 13548 corp: 9/362b lim: 100 exec/s: 0 rss: 69Mb L: 82/82 MS: 1 CMP- DE: "Q\000\000\000"- 00:09:39.859 [2024-07-23 13:51:30.698742] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.859 [2024-07-23 13:51:30.698776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.698829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:39.859 [2024-07-23 13:51:30.698850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.698908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:39.859 [2024-07-23 13:51:30.698927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:39.859 NEW_FUNC[1/1]: 0x195e300 in get_rusage 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:39.859 #18 NEW cov: 11786 ft: 13585 corp: 10/435b lim: 100 exec/s: 0 rss: 69Mb L: 73/82 MS: 1 ShuffleBytes- 00:09:39.859 [2024-07-23 13:51:30.758663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.859 [2024-07-23 13:51:30.758696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.859 #19 NEW cov: 11786 ft: 13663 corp: 11/460b lim: 100 exec/s: 0 rss: 69Mb L: 25/82 MS: 1 ShuffleBytes- 00:09:39.859 [2024-07-23 13:51:30.808788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.859 [2024-07-23 13:51:30.808823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.859 #20 NEW cov: 11786 ft: 13736 corp: 12/485b lim: 100 exec/s: 20 rss: 69Mb L: 25/82 MS: 1 ChangeBit- 00:09:39.859 [2024-07-23 13:51:30.869243] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:39.859 [2024-07-23 13:51:30.869278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.869326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:39.859 [2024-07-23 13:51:30.869345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:39.859 [2024-07-23 13:51:30.869411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:39.859 [2024-07-23 13:51:30.869431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.117 #26 NEW cov: 11786 ft: 13862 corp: 13/562b lim: 100 exec/s: 26 rss: 69Mb L: 77/82 MS: 1 CrossOver- 00:09:40.117 [2024-07-23 13:51:30.919568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.117 [2024-07-23 13:51:30.919602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:30.919665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.117 [2024-07-23 13:51:30.919685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:30.919749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.117 [2024-07-23 13:51:30.919769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:30.919831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:40.117 [2024-07-23 13:51:30.919849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.117 #30 NEW cov: 11786 ft: 13873 corp: 14/650b lim: 100 exec/s: 30 rss: 69Mb L: 88/88 MS: 4 ChangeBit-ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:09:40.117 [2024-07-23 13:51:30.969649] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.117 [2024-07-23 13:51:30.969683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:30.969735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.117 [2024-07-23 13:51:30.969752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:30.969812] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.117 [2024-07-23 13:51:30.969831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:30.969894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:40.117 [2024-07-23 13:51:30.969914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.117 #31 NEW cov: 11786 ft: 13890 corp: 15/741b lim: 100 exec/s: 31 rss: 69Mb L: 91/91 MS: 1 CrossOver- 00:09:40.117 [2024-07-23 13:51:31.019815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.117 [2024-07-23 13:51:31.019850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:31.019902] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.117 [2024-07-23 13:51:31.019921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:31.019981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.117 [2024-07-23 13:51:31.020000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.117 [2024-07-23 13:51:31.020062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:40.117 [2024-07-23 13:51:31.020081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.117 #32 NEW cov: 11786 ft: 13894 corp: 16/839b lim: 100 exec/s: 32 rss: 69Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:09:40.117 [2024-07-23 13:51:31.079613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.117 [2024-07-23 13:51:31.079647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.117 #33 NEW cov: 11786 ft: 13951 corp: 17/864b lim: 100 exec/s: 33 rss: 69Mb L: 25/98 MS: 1 ChangeBinInt- 00:09:40.375 [2024-07-23 13:51:31.139763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.375 [2024-07-23 13:51:31.139796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.375 #34 NEW cov: 11786 ft: 13990 corp: 18/889b lim: 100 exec/s: 34 rss: 69Mb L: 25/98 MS: 1 
ChangeBinInt- 00:09:40.375 [2024-07-23 13:51:31.190207] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.375 [2024-07-23 13:51:31.190246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.375 [2024-07-23 13:51:31.190301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.375 [2024-07-23 13:51:31.190320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.375 [2024-07-23 13:51:31.190383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.375 [2024-07-23 13:51:31.190403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.375 #35 NEW cov: 11786 ft: 13999 corp: 19/968b lim: 100 exec/s: 35 rss: 69Mb L: 79/98 MS: 1 InsertByte- 00:09:40.375 [2024-07-23 13:51:31.240063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.375 [2024-07-23 13:51:31.240096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.375 #36 NEW cov: 11786 ft: 14031 corp: 20/993b lim: 100 exec/s: 36 rss: 69Mb L: 25/98 MS: 1 CopyPart- 00:09:40.375 [2024-07-23 13:51:31.280163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.375 [2024-07-23 13:51:31.280196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.375 #37 NEW cov: 11786 ft: 14101 corp: 21/1018b lim: 100 exec/s: 37 rss: 69Mb L: 25/98 MS: 1 CopyPart- 00:09:40.375 [2024-07-23 13:51:31.330580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.375 [2024-07-23 13:51:31.330614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.375 [2024-07-23 13:51:31.330667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.375 [2024-07-23 13:51:31.330687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.375 [2024-07-23 13:51:31.330750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.375 [2024-07-23 13:51:31.330770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.375 #43 NEW cov: 11786 ft: 14116 corp: 22/1080b lim: 100 exec/s: 43 rss: 69Mb L: 62/98 MS: 1 InsertRepeatedBytes- 00:09:40.375 [2024-07-23 13:51:31.390539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.375 [2024-07-23 13:51:31.390572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.660 #44 NEW cov: 11786 ft: 14148 corp: 23/1113b lim: 100 exec/s: 44 rss: 69Mb L: 33/98 MS: 1 CMP- DE: "\000\000\177/\324\031\352I"- 00:09:40.660 [2024-07-23 13:51:31.450811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES 
(08) sqid:1 cid:0 nsid:0 00:09:40.660 [2024-07-23 13:51:31.450845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.660 [2024-07-23 13:51:31.450890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.660 [2024-07-23 13:51:31.450908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.660 #48 NEW cov: 11786 ft: 14387 corp: 24/1170b lim: 100 exec/s: 48 rss: 70Mb L: 57/98 MS: 4 EraseBytes-InsertByte-CMP-InsertRepeatedBytes- DE: "\015\000"- 00:09:40.660 [2024-07-23 13:51:31.500832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.660 [2024-07-23 13:51:31.500865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.660 #49 NEW cov: 11786 ft: 14425 corp: 25/1192b lim: 100 exec/s: 49 rss: 70Mb L: 22/98 MS: 1 EraseBytes- 00:09:40.661 [2024-07-23 13:51:31.551455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.661 [2024-07-23 13:51:31.551489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.551555] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.661 [2024-07-23 13:51:31.551575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.551638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.661 [2024-07-23 13:51:31.551657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.551719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:40.661 [2024-07-23 13:51:31.551739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.551804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:4 nsid:0 00:09:40.661 [2024-07-23 13:51:31.551824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:09:40.661 #50 NEW cov: 11786 ft: 14462 corp: 26/1292b lim: 100 exec/s: 50 rss: 70Mb L: 100/100 MS: 1 CrossOver- 00:09:40.661 [2024-07-23 13:51:31.611286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.661 [2024-07-23 13:51:31.611320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.611373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.661 [2024-07-23 13:51:31.611392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.661 #51 NEW cov: 11786 ft: 14472 corp: 27/1337b lim: 100 exec/s: 51 rss: 70Mb L: 45/100 MS: 1 CrossOver- 00:09:40.661 
[2024-07-23 13:51:31.671713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.661 [2024-07-23 13:51:31.671748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.671807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.661 [2024-07-23 13:51:31.671827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.671887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.661 [2024-07-23 13:51:31.671906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.661 [2024-07-23 13:51:31.671974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:09:40.661 [2024-07-23 13:51:31.671994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:40.923 #52 NEW cov: 11786 ft: 14483 corp: 28/1435b lim: 100 exec/s: 52 rss: 70Mb L: 98/100 MS: 1 InsertRepeatedBytes- 00:09:40.923 [2024-07-23 13:51:31.731735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.923 [2024-07-23 13:51:31.731769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.923 [2024-07-23 13:51:31.731813] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.923 [2024-07-23 13:51:31.731833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.923 [2024-07-23 13:51:31.731894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:09:40.923 [2024-07-23 13:51:31.731915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:40.923 #53 NEW cov: 11786 ft: 14492 corp: 29/1497b lim: 100 exec/s: 53 rss: 70Mb L: 62/100 MS: 1 PersAutoDict- DE: "\015\000"- 00:09:40.923 [2024-07-23 13:51:31.791807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:09:40.923 [2024-07-23 13:51:31.791842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:40.923 [2024-07-23 13:51:31.791900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:09:40.923 [2024-07-23 13:51:31.791921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:40.923 #54 NEW cov: 11786 ft: 14527 corp: 30/1554b lim: 100 exec/s: 27 rss: 70Mb L: 57/100 MS: 1 ShuffleBytes- 00:09:40.923 #54 DONE cov: 11786 ft: 14527 corp: 30/1554b lim: 100 exec/s: 27 rss: 70Mb 00:09:40.923 ###### Recommended dictionary. ###### 00:09:40.923 "\021\000\000\000" # Uses: 2 00:09:40.923 "Q\000\000\000" # Uses: 0 00:09:40.923 "\000\000\177/\324\031\352I" # Uses: 0 00:09:40.923 "\015\000" # Uses: 1 00:09:40.923 ###### End of recommended dictionary. 
###### 00:09:40.923 Done 54 runs in 2 second(s) 00:09:41.182 13:51:31 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_18.conf 00:09:41.182 13:51:31 -- ../common.sh@72 -- # (( i++ )) 00:09:41.182 13:51:31 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:41.182 13:51:31 -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:09:41.182 13:51:31 -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:09:41.182 13:51:31 -- nvmf/run.sh@24 -- # local timen=1 00:09:41.182 13:51:31 -- nvmf/run.sh@25 -- # local core=0x1 00:09:41.182 13:51:31 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:41.182 13:51:31 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:09:41.182 13:51:31 -- nvmf/run.sh@29 -- # printf %02d 19 00:09:41.182 13:51:31 -- nvmf/run.sh@29 -- # port=4419 00:09:41.182 13:51:31 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:41.182 13:51:31 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:09:41.182 13:51:31 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:41.182 13:51:32 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 -r /var/tmp/spdk19.sock 00:09:41.182 [2024-07-23 13:51:32.029985] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:41.182 [2024-07-23 13:51:32.030077] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916524 ] 00:09:41.182 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.441 [2024-07-23 13:51:32.290100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.441 [2024-07-23 13:51:32.375492] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:41.441 [2024-07-23 13:51:32.375675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.441 [2024-07-23 13:51:32.438175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.441 [2024-07-23 13:51:32.454399] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:09:41.699 INFO: Running with entropic power schedule (0xFF, 100). 00:09:41.699 INFO: Seed: 3927853301 00:09:41.699 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:41.699 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:41.699 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:09:41.699 INFO: A corpus is not provided, starting from an empty corpus 00:09:41.699 #2 INITED exec/s: 0 rss: 61Mb 00:09:41.699 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:41.699 This may also happen if the target rejected all inputs we tried so far 00:09:41.699 [2024-07-23 13:51:32.509407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:167772160 len:1 00:09:41.699 [2024-07-23 13:51:32.509456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.265 NEW_FUNC[1/670]: 0x4a1250 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:09:42.265 NEW_FUNC[2/670]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:42.265 #5 NEW cov: 11537 ft: 11538 corp: 2/12b lim: 50 exec/s: 0 rss: 68Mb L: 11/11 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:09:42.265 [2024-07-23 13:51:33.020849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:42.265 [2024-07-23 13:51:33.020909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.265 [2024-07-23 13:51:33.020961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36495 00:09:42.265 [2024-07-23 13:51:33.020988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.265 [2024-07-23 13:51:33.021031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304543006887566 len:36495 00:09:42.265 [2024-07-23 13:51:33.021054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.265 #6 NEW cov: 11650 ft: 12352 corp: 3/43b lim: 50 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:09:42.265 [2024-07-23 13:51:33.100790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:09:42.265 [2024-07-23 13:51:33.100836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.265 [2024-07-23 13:51:33.100886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:42.265 [2024-07-23 13:51:33.100913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.265 #9 NEW cov: 11656 ft: 12845 corp: 4/68b lim: 50 exec/s: 0 rss: 68Mb L: 25/31 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:09:42.265 [2024-07-23 13:51:33.171111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:0 len:1 00:09:42.265 [2024-07-23 13:51:33.171163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.265 [2024-07-23 13:51:33.171222] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:42.265 [2024-07-23 13:51:33.171249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.265 [2024-07-23 
13:51:33.171294] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:16 00:09:42.266 [2024-07-23 13:51:33.171319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.266 #10 NEW cov: 11741 ft: 13133 corp: 5/98b lim: 50 exec/s: 0 rss: 68Mb L: 30/31 MS: 1 CopyPart- 00:09:42.266 [2024-07-23 13:51:33.271389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:42.266 [2024-07-23 13:51:33.271433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.266 [2024-07-23 13:51:33.271480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36495 00:09:42.266 [2024-07-23 13:51:33.271506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.266 [2024-07-23 13:51:33.271550] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304543006887566 len:36353 00:09:42.266 [2024-07-23 13:51:33.271574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.266 [2024-07-23 13:51:33.271616] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:0 len:1 00:09:42.266 [2024-07-23 13:51:33.271640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.525 #11 NEW cov: 11741 ft: 13482 corp: 6/143b lim: 50 exec/s: 0 rss: 69Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:09:42.525 [2024-07-23 13:51:33.361699] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:42.525 [2024-07-23 13:51:33.361741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.525 [2024-07-23 13:51:33.361787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071813726207 len:65536 00:09:42.525 [2024-07-23 13:51:33.361814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.525 [2024-07-23 13:51:33.361858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304544910118542 len:36495 00:09:42.525 [2024-07-23 13:51:33.361882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.525 [2024-07-23 13:51:33.361924] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10272304543006887566 len:36495 00:09:42.525 [2024-07-23 13:51:33.361948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.525 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:42.525 #17 NEW cov: 11764 ft: 13615 corp: 7/185b lim: 50 exec/s: 0 rss: 69Mb L: 42/45 MS: 1 
InsertRepeatedBytes- 00:09:42.525 [2024-07-23 13:51:33.441758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 00:09:42.525 [2024-07-23 13:51:33.441800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.525 [2024-07-23 13:51:33.441854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906369333256140342 len:13879 00:09:42.525 [2024-07-23 13:51:33.441881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.525 #23 NEW cov: 11764 ft: 13673 corp: 8/211b lim: 50 exec/s: 23 rss: 69Mb L: 26/45 MS: 1 InsertRepeatedBytes- 00:09:42.525 [2024-07-23 13:51:33.511942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:42.525 [2024-07-23 13:51:33.511983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.525 [2024-07-23 13:51:33.512031] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36495 00:09:42.525 [2024-07-23 13:51:33.512057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.525 [2024-07-23 13:51:33.512102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304543006887566 len:36495 00:09:42.525 [2024-07-23 13:51:33.512126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.783 #24 NEW cov: 11764 ft: 13705 corp: 9/242b lim: 50 exec/s: 24 rss: 69Mb L: 31/45 MS: 1 ShuffleBytes- 00:09:42.783 [2024-07-23 13:51:33.582264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10242186718284254862 len:36495 00:09:42.783 [2024-07-23 13:51:33.582305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.783 [2024-07-23 13:51:33.582353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744071813726207 len:65536 00:09:42.783 [2024-07-23 13:51:33.582379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.783 [2024-07-23 13:51:33.582423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304544910118542 len:36495 00:09:42.783 [2024-07-23 13:51:33.582447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:42.783 [2024-07-23 13:51:33.582489] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:10272304543006887566 len:36495 00:09:42.783 [2024-07-23 13:51:33.582513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:42.783 #25 NEW cov: 11764 ft: 13762 corp: 10/284b lim: 50 exec/s: 25 rss: 69Mb L: 42/45 MS: 1 ChangeByte- 00:09:42.783 [2024-07-23 
13:51:33.672346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:14080 00:09:42.783 [2024-07-23 13:51:33.672389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.783 [2024-07-23 13:51:33.672438] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:3906369336641584950 len:13879 00:09:42.783 [2024-07-23 13:51:33.672465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.783 #26 NEW cov: 11764 ft: 13788 corp: 11/310b lim: 50 exec/s: 26 rss: 69Mb L: 26/45 MS: 1 CMP- DE: "\377\377\377\377"- 00:09:42.783 [2024-07-23 13:51:33.762687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:215 len:1 00:09:42.783 [2024-07-23 13:51:33.762728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:42.783 [2024-07-23 13:51:33.762782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:42.783 [2024-07-23 13:51:33.762809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:42.783 [2024-07-23 13:51:33.762854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:0 len:16 00:09:42.783 [2024-07-23 13:51:33.762878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.041 #27 NEW cov: 11764 ft: 13818 corp: 12/340b lim: 50 exec/s: 27 rss: 69Mb L: 30/45 MS: 1 ChangeByte- 00:09:43.041 [2024-07-23 13:51:33.852871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:1945614645489710646 len:13879 00:09:43.041 [2024-07-23 13:51:33.852913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.041 [2024-07-23 13:51:33.852961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18389945734892879871 len:13879 00:09:43.041 [2024-07-23 13:51:33.852987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.041 #28 NEW cov: 11764 ft: 13875 corp: 13/368b lim: 50 exec/s: 28 rss: 69Mb L: 28/45 MS: 1 CMP- DE: "\033\000"- 00:09:43.041 [2024-07-23 13:51:33.943262] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:43.041 [2024-07-23 13:51:33.943304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.041 [2024-07-23 13:51:33.943350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36380 00:09:43.041 [2024-07-23 13:51:33.943377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.041 [2024-07-23 13:51:33.943421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 
cid:2 nsid:0 lba:10272304540624522894 len:36495 00:09:43.041 [2024-07-23 13:51:33.943445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.041 [2024-07-23 13:51:33.943487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2382364672 len:1 00:09:43.041 [2024-07-23 13:51:33.943511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.041 #29 NEW cov: 11764 ft: 13891 corp: 14/415b lim: 50 exec/s: 29 rss: 69Mb L: 47/47 MS: 1 PersAutoDict- DE: "\033\000"- 00:09:43.041 [2024-07-23 13:51:34.043418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:43.041 [2024-07-23 13:51:34.043460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.042 [2024-07-23 13:51:34.043506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36495 00:09:43.042 [2024-07-23 13:51:34.043533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.042 [2024-07-23 13:51:34.043578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304543006887431 len:36495 00:09:43.042 [2024-07-23 13:51:34.043602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.300 #30 NEW cov: 11764 ft: 13895 corp: 15/447b lim: 50 exec/s: 30 rss: 69Mb L: 32/47 MS: 1 InsertByte- 00:09:43.300 [2024-07-23 13:51:34.113487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374405004862685184 len:1 00:09:43.300 [2024-07-23 13:51:34.113534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.300 #31 NEW cov: 11764 ft: 13943 corp: 16/462b lim: 50 exec/s: 31 rss: 69Mb L: 15/47 MS: 1 CMP- DE: "\376\377\000\000"- 00:09:43.300 [2024-07-23 13:51:34.203965] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:43.300 [2024-07-23 13:51:34.204009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.300 [2024-07-23 13:51:34.204057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36380 00:09:43.300 [2024-07-23 13:51:34.204084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.300 [2024-07-23 13:51:34.204128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304540624522894 len:36495 00:09:43.300 [2024-07-23 13:51:34.204153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.300 [2024-07-23 13:51:34.204195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2382364672 len:1 
00:09:43.300 [2024-07-23 13:51:34.204229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.300 #32 NEW cov: 11764 ft: 13966 corp: 17/509b lim: 50 exec/s: 32 rss: 69Mb L: 47/47 MS: 1 ChangeByte- 00:09:43.300 [2024-07-23 13:51:34.294226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36353 00:09:43.300 [2024-07-23 13:51:34.294270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.300 [2024-07-23 13:51:34.294316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:43.300 [2024-07-23 13:51:34.294343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.300 [2024-07-23 13:51:34.294387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304540624522894 len:36495 00:09:43.300 [2024-07-23 13:51:34.294411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.300 [2024-07-23 13:51:34.294453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2382364672 len:1 00:09:43.300 [2024-07-23 13:51:34.294477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.558 #33 NEW cov: 11764 ft: 14033 corp: 18/556b lim: 50 exec/s: 33 rss: 69Mb L: 47/47 MS: 1 CrossOver- 00:09:43.558 [2024-07-23 13:51:34.364376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446743584260394751 len:36495 00:09:43.558 [2024-07-23 13:51:34.364419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.364465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:10272304543006887566 len:36380 00:09:43.558 [2024-07-23 13:51:34.364492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.364536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304540624522894 len:36495 00:09:43.558 [2024-07-23 13:51:34.364561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.364602] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2382364672 len:1 00:09:43.558 [2024-07-23 13:51:34.364632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.558 #34 NEW cov: 11764 ft: 14115 corp: 19/603b lim: 50 exec/s: 34 rss: 69Mb L: 47/47 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:09:43.558 [2024-07-23 13:51:34.434487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36353 00:09:43.558 [2024-07-23 13:51:34.434529] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.434578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:0 len:1 00:09:43.558 [2024-07-23 13:51:34.434605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.558 #35 NEW cov: 11764 ft: 14145 corp: 20/627b lim: 50 exec/s: 35 rss: 69Mb L: 24/47 MS: 1 EraseBytes- 00:09:43.558 [2024-07-23 13:51:34.504753] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10272304540792295054 len:36495 00:09:43.558 [2024-07-23 13:51:34.504795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.504842] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374687091948687102 len:36380 00:09:43.558 [2024-07-23 13:51:34.504869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.504913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:10272304540624522894 len:36495 00:09:43.558 [2024-07-23 13:51:34.504937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:43.558 [2024-07-23 13:51:34.504979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:2382364672 len:1 00:09:43.558 [2024-07-23 13:51:34.505003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:43.558 #36 NEW cov: 11764 ft: 14167 corp: 21/674b lim: 50 exec/s: 18 rss: 69Mb L: 47/47 MS: 1 PersAutoDict- DE: "\376\377\000\000"- 00:09:43.558 #36 DONE cov: 11764 ft: 14167 corp: 21/674b lim: 50 exec/s: 18 rss: 69Mb 00:09:43.558 ###### Recommended dictionary. ###### 00:09:43.558 "\377\377\377\377" # Uses: 1 00:09:43.558 "\033\000" # Uses: 1 00:09:43.558 "\376\377\000\000" # Uses: 1 00:09:43.558 ###### End of recommended dictionary. 
###### 00:09:43.558 Done 36 runs in 2 second(s) 00:09:43.817 13:51:34 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_19.conf 00:09:43.817 13:51:34 -- ../common.sh@72 -- # (( i++ )) 00:09:43.817 13:51:34 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:43.817 13:51:34 -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:09:43.817 13:51:34 -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:09:43.817 13:51:34 -- nvmf/run.sh@24 -- # local timen=1 00:09:43.817 13:51:34 -- nvmf/run.sh@25 -- # local core=0x1 00:09:43.817 13:51:34 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:43.817 13:51:34 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:09:43.817 13:51:34 -- nvmf/run.sh@29 -- # printf %02d 20 00:09:43.817 13:51:34 -- nvmf/run.sh@29 -- # port=4420 00:09:43.817 13:51:34 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:43.817 13:51:34 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:09:43.817 13:51:34 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:43.817 13:51:34 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 -r /var/tmp/spdk20.sock 00:09:43.817 [2024-07-23 13:51:34.757823] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:43.817 [2024-07-23 13:51:34.757899] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916895 ] 00:09:43.817 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.076 [2024-07-23 13:51:35.015562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.334 [2024-07-23 13:51:35.100942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.334 [2024-07-23 13:51:35.101130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.334 [2024-07-23 13:51:35.163593] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.334 [2024-07-23 13:51:35.179811] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:44.334 INFO: Running with entropic power schedule (0xFF, 100). 00:09:44.334 INFO: Seed: 2355883013 00:09:44.334 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:44.334 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:44.334 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:09:44.334 INFO: A corpus is not provided, starting from an empty corpus 00:09:44.334 #2 INITED exec/s: 0 rss: 61Mb 00:09:44.334 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:44.334 This may also happen if the target rejected all inputs we tried so far 00:09:44.334 [2024-07-23 13:51:35.238958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:44.334 [2024-07-23 13:51:35.238998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.334 [2024-07-23 13:51:35.239062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:44.334 [2024-07-23 13:51:35.239085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.900 NEW_FUNC[1/672]: 0x4a2e10 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:09:44.900 NEW_FUNC[2/672]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:44.900 #4 NEW cov: 11594 ft: 11595 corp: 2/50b lim: 90 exec/s: 0 rss: 68Mb L: 49/49 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:44.900 [2024-07-23 13:51:35.732551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:44.900 [2024-07-23 13:51:35.732608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.900 [2024-07-23 13:51:35.732693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:44.900 [2024-07-23 13:51:35.732717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.900 [2024-07-23 13:51:35.732824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:44.900 [2024-07-23 13:51:35.732852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:44.900 #5 NEW cov: 11708 ft: 12700 corp: 3/105b lim: 90 exec/s: 0 rss: 68Mb L: 55/55 MS: 1 InsertRepeatedBytes- 00:09:44.900 [2024-07-23 13:51:35.802066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:44.900 [2024-07-23 13:51:35.802107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.900 [2024-07-23 13:51:35.802191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:44.900 [2024-07-23 13:51:35.802219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.900 #6 NEW cov: 11714 ft: 12883 corp: 4/155b lim: 90 exec/s: 0 rss: 68Mb L: 50/55 MS: 1 CrossOver- 00:09:44.900 [2024-07-23 13:51:35.872276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:44.900 [2024-07-23 13:51:35.872322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:44.900 [2024-07-23 13:51:35.872433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:44.900 [2024-07-23 13:51:35.872459] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:44.900 #9 NEW cov: 11799 ft: 13136 corp: 5/205b lim: 90 exec/s: 0 rss: 68Mb L: 50/55 MS: 3 ChangeBinInt-ChangeByte-CrossOver- 00:09:45.158 [2024-07-23 13:51:35.932918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.158 [2024-07-23 13:51:35.932958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.158 [2024-07-23 13:51:35.933052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.158 [2024-07-23 13:51:35.933074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.158 [2024-07-23 13:51:35.933185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.158 [2024-07-23 13:51:35.933210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.158 #10 NEW cov: 11799 ft: 13183 corp: 6/272b lim: 90 exec/s: 0 rss: 69Mb L: 67/67 MS: 1 CopyPart- 00:09:45.158 [2024-07-23 13:51:36.003176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.158 [2024-07-23 13:51:36.003218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.158 [2024-07-23 13:51:36.003293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.158 [2024-07-23 13:51:36.003318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.159 [2024-07-23 13:51:36.003403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.159 [2024-07-23 13:51:36.003424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.159 #11 NEW cov: 11799 ft: 13244 corp: 7/339b lim: 90 exec/s: 0 rss: 69Mb L: 67/67 MS: 1 ChangeByte- 00:09:45.159 [2024-07-23 13:51:36.073537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.159 [2024-07-23 13:51:36.073577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.159 [2024-07-23 13:51:36.073664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.159 [2024-07-23 13:51:36.073688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.159 [2024-07-23 13:51:36.073791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.159 [2024-07-23 13:51:36.073814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.159 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:45.159 #12 NEW cov: 11822 ft: 13289 corp: 8/394b lim: 90 
exec/s: 0 rss: 69Mb L: 55/67 MS: 1 ChangeBit- 00:09:45.159 [2024-07-23 13:51:36.143501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.159 [2024-07-23 13:51:36.143541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.159 [2024-07-23 13:51:36.143646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.159 [2024-07-23 13:51:36.143671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.159 #13 NEW cov: 11822 ft: 13326 corp: 9/444b lim: 90 exec/s: 0 rss: 69Mb L: 50/67 MS: 1 ChangeBit- 00:09:45.417 [2024-07-23 13:51:36.204002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.417 [2024-07-23 13:51:36.204042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.417 [2024-07-23 13:51:36.204122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.417 [2024-07-23 13:51:36.204150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.417 [2024-07-23 13:51:36.204258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.417 [2024-07-23 13:51:36.204282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.417 #14 NEW cov: 11822 ft: 13358 corp: 10/512b lim: 90 exec/s: 14 rss: 69Mb L: 68/68 MS: 1 InsertByte- 00:09:45.417 [2024-07-23 13:51:36.264057] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.417 [2024-07-23 13:51:36.264095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.417 [2024-07-23 13:51:36.264194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.417 [2024-07-23 13:51:36.264221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.417 #15 NEW cov: 11822 ft: 13424 corp: 11/562b lim: 90 exec/s: 15 rss: 69Mb L: 50/68 MS: 1 ChangeBinInt- 00:09:45.417 [2024-07-23 13:51:36.334310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.417 [2024-07-23 13:51:36.334347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.417 [2024-07-23 13:51:36.334452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.417 [2024-07-23 13:51:36.334478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.417 #16 NEW cov: 11822 ft: 13460 corp: 12/609b lim: 90 exec/s: 16 rss: 69Mb L: 47/68 MS: 1 CrossOver- 00:09:45.417 [2024-07-23 13:51:36.404502] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.417 [2024-07-23 
13:51:36.404542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.417 [2024-07-23 13:51:36.404647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.417 [2024-07-23 13:51:36.404672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.417 #22 NEW cov: 11822 ft: 13489 corp: 13/660b lim: 90 exec/s: 22 rss: 69Mb L: 51/68 MS: 1 InsertByte- 00:09:45.675 [2024-07-23 13:51:36.464351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.675 [2024-07-23 13:51:36.464389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.675 #23 NEW cov: 11822 ft: 14285 corp: 14/685b lim: 90 exec/s: 23 rss: 69Mb L: 25/68 MS: 1 EraseBytes- 00:09:45.675 [2024-07-23 13:51:36.535004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.675 [2024-07-23 13:51:36.535046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.675 [2024-07-23 13:51:36.535157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.675 [2024-07-23 13:51:36.535186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.675 #24 NEW cov: 11822 ft: 14320 corp: 15/735b lim: 90 exec/s: 24 rss: 69Mb L: 50/68 MS: 1 CMP- DE: "\275\227\2408\0037-\000"- 00:09:45.675 [2024-07-23 13:51:36.605270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.675 [2024-07-23 13:51:36.605310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.675 [2024-07-23 13:51:36.605419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.675 [2024-07-23 13:51:36.605445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.675 #25 NEW cov: 11822 ft: 14333 corp: 16/785b lim: 90 exec/s: 25 rss: 70Mb L: 50/68 MS: 1 ChangeBit- 00:09:45.675 [2024-07-23 13:51:36.675926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.675 [2024-07-23 13:51:36.675965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.675 [2024-07-23 13:51:36.676072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.675 [2024-07-23 13:51:36.676094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.675 [2024-07-23 13:51:36.676205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.675 [2024-07-23 13:51:36.676232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.934 #26 NEW cov: 11822 ft: 14349 
corp: 17/840b lim: 90 exec/s: 26 rss: 70Mb L: 55/68 MS: 1 CopyPart- 00:09:45.934 [2024-07-23 13:51:36.736318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.934 [2024-07-23 13:51:36.736357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.736446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.934 [2024-07-23 13:51:36.736472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.736575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.934 [2024-07-23 13:51:36.736600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.934 #27 NEW cov: 11822 ft: 14391 corp: 18/898b lim: 90 exec/s: 27 rss: 70Mb L: 58/68 MS: 1 PersAutoDict- DE: "\275\227\2408\0037-\000"- 00:09:45.934 [2024-07-23 13:51:36.796012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.934 [2024-07-23 13:51:36.796051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.796157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.934 [2024-07-23 13:51:36.796180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.934 #28 NEW cov: 11822 ft: 14445 corp: 19/949b lim: 90 exec/s: 28 rss: 70Mb L: 51/68 MS: 1 ChangeBit- 00:09:45.934 [2024-07-23 13:51:36.866827] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.934 [2024-07-23 13:51:36.866865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.866941] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.934 [2024-07-23 13:51:36.866970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.867040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.934 [2024-07-23 13:51:36.867065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:45.934 #29 NEW cov: 11822 ft: 14460 corp: 20/1006b lim: 90 exec/s: 29 rss: 70Mb L: 57/68 MS: 1 CrossOver- 00:09:45.934 [2024-07-23 13:51:36.927133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:45.934 [2024-07-23 13:51:36.927174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.927277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:45.934 [2024-07-23 13:51:36.927300] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:45.934 [2024-07-23 13:51:36.927402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:45.934 [2024-07-23 13:51:36.927429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.192 #30 NEW cov: 11822 ft: 14532 corp: 21/1073b lim: 90 exec/s: 30 rss: 70Mb L: 67/68 MS: 1 ShuffleBytes- 00:09:46.192 [2024-07-23 13:51:36.987369] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:46.192 [2024-07-23 13:51:36.987409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.192 [2024-07-23 13:51:36.987485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:46.192 [2024-07-23 13:51:36.987508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.192 [2024-07-23 13:51:36.987612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:46.192 [2024-07-23 13:51:36.987639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.192 #31 NEW cov: 11822 ft: 14552 corp: 22/1130b lim: 90 exec/s: 31 rss: 70Mb L: 57/68 MS: 1 PersAutoDict- DE: "\275\227\2408\0037-\000"- 00:09:46.192 [2024-07-23 13:51:37.047303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:46.192 [2024-07-23 13:51:37.047341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.192 [2024-07-23 13:51:37.047451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:46.192 [2024-07-23 13:51:37.047474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.192 #32 NEW cov: 11822 ft: 14563 corp: 23/1181b lim: 90 exec/s: 32 rss: 70Mb L: 51/68 MS: 1 ShuffleBytes- 00:09:46.192 [2024-07-23 13:51:37.118024] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:46.192 [2024-07-23 13:51:37.118066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.192 [2024-07-23 13:51:37.118179] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:46.192 [2024-07-23 13:51:37.118207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.192 [2024-07-23 13:51:37.118328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:09:46.192 [2024-07-23 13:51:37.118353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:46.192 #33 NEW cov: 11822 ft: 14600 corp: 24/1240b lim: 90 exec/s: 33 rss: 70Mb L: 59/68 MS: 1 InsertByte- 00:09:46.192 [2024-07-23 13:51:37.187920] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:09:46.192 [2024-07-23 13:51:37.187956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.192 [2024-07-23 13:51:37.188066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:09:46.192 [2024-07-23 13:51:37.188088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:46.449 #34 NEW cov: 11822 ft: 14688 corp: 25/1279b lim: 90 exec/s: 17 rss: 70Mb L: 39/68 MS: 1 EraseBytes- 00:09:46.449 #34 DONE cov: 11822 ft: 14688 corp: 25/1279b lim: 90 exec/s: 17 rss: 70Mb 00:09:46.449 ###### Recommended dictionary. ###### 00:09:46.449 "\275\227\2408\0037-\000" # Uses: 2 00:09:46.449 ###### End of recommended dictionary. ###### 00:09:46.449 Done 34 runs in 2 second(s) 00:09:46.449 13:51:37 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_20.conf 00:09:46.449 13:51:37 -- ../common.sh@72 -- # (( i++ )) 00:09:46.449 13:51:37 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:46.449 13:51:37 -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:09:46.449 13:51:37 -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:09:46.449 13:51:37 -- nvmf/run.sh@24 -- # local timen=1 00:09:46.449 13:51:37 -- nvmf/run.sh@25 -- # local core=0x1 00:09:46.449 13:51:37 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:46.449 13:51:37 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:09:46.449 13:51:37 -- nvmf/run.sh@29 -- # printf %02d 21 00:09:46.449 13:51:37 -- nvmf/run.sh@29 -- # port=4421 00:09:46.449 13:51:37 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:46.449 13:51:37 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:09:46.449 13:51:37 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:46.449 13:51:37 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 -r /var/tmp/spdk21.sock 00:09:46.449 [2024-07-23 13:51:37.421345] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:46.449 [2024-07-23 13:51:37.421424] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917256 ] 00:09:46.707 EAL: No free 2048 kB hugepages reported on node 1 00:09:46.707 [2024-07-23 13:51:37.678434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.965 [2024-07-23 13:51:37.764636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:46.965 [2024-07-23 13:51:37.764824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.965 [2024-07-23 13:51:37.827379] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.965 [2024-07-23 13:51:37.843610] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:09:46.965 INFO: Running with entropic power schedule (0xFF, 100). 00:09:46.965 INFO: Seed: 725928094 00:09:46.965 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:46.965 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:46.965 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:09:46.965 INFO: A corpus is not provided, starting from an empty corpus 00:09:46.965 #2 INITED exec/s: 0 rss: 61Mb 00:09:46.965 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:46.965 This may also happen if the target rejected all inputs we tried so far 00:09:46.965 [2024-07-23 13:51:37.899284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:46.965 [2024-07-23 13:51:37.899325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:46.965 [2024-07-23 13:51:37.899395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:46.965 [2024-07-23 13:51:37.899417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.530 NEW_FUNC[1/672]: 0x4a6030 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:09:47.530 NEW_FUNC[2/672]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:47.530 #17 NEW cov: 11570 ft: 11571 corp: 2/21b lim: 50 exec/s: 0 rss: 68Mb L: 20/20 MS: 5 InsertByte-CrossOver-CopyPart-EraseBytes-InsertRepeatedBytes- 00:09:47.530 [2024-07-23 13:51:38.370398] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.530 [2024-07-23 13:51:38.370446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.530 [2024-07-23 13:51:38.370521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:47.530 [2024-07-23 13:51:38.370545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.530 #18 NEW cov: 11683 ft: 12127 corp: 3/42b lim: 50 exec/s: 0 rss: 68Mb L: 21/21 MS: 1 CrossOver- 00:09:47.530 [2024-07-23 
13:51:38.420293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.530 [2024-07-23 13:51:38.420331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.530 #19 NEW cov: 11689 ft: 13098 corp: 4/61b lim: 50 exec/s: 0 rss: 69Mb L: 19/21 MS: 1 EraseBytes- 00:09:47.530 [2024-07-23 13:51:38.480680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.530 [2024-07-23 13:51:38.480719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.530 [2024-07-23 13:51:38.480771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:47.530 [2024-07-23 13:51:38.480794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.530 #20 NEW cov: 11774 ft: 13329 corp: 5/82b lim: 50 exec/s: 0 rss: 69Mb L: 21/21 MS: 1 CopyPart- 00:09:47.530 [2024-07-23 13:51:38.530636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.530 [2024-07-23 13:51:38.530673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.788 #22 NEW cov: 11774 ft: 13419 corp: 6/95b lim: 50 exec/s: 0 rss: 69Mb L: 13/21 MS: 2 InsertByte-CrossOver- 00:09:47.789 [2024-07-23 13:51:38.580887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.789 [2024-07-23 13:51:38.580926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.789 [2024-07-23 13:51:38.581014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:47.789 [2024-07-23 13:51:38.581036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.789 #23 NEW cov: 11774 ft: 13588 corp: 7/115b lim: 50 exec/s: 0 rss: 69Mb L: 20/21 MS: 1 ChangeBinInt- 00:09:47.789 [2024-07-23 13:51:38.641108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.789 [2024-07-23 13:51:38.641143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.789 [2024-07-23 13:51:38.641190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:47.789 [2024-07-23 13:51:38.641218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.789 #24 NEW cov: 11774 ft: 13646 corp: 8/136b lim: 50 exec/s: 0 rss: 69Mb L: 21/21 MS: 1 ShuffleBytes- 00:09:47.789 [2024-07-23 13:51:38.701095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.789 [2024-07-23 13:51:38.701129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.789 #25 NEW cov: 11774 ft: 13704 corp: 9/151b lim: 50 exec/s: 0 rss: 69Mb L: 15/21 MS: 1 EraseBytes- 00:09:47.789 
[2024-07-23 13:51:38.751642] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.789 [2024-07-23 13:51:38.751679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.789 [2024-07-23 13:51:38.751725] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:47.789 [2024-07-23 13:51:38.751747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:47.789 [2024-07-23 13:51:38.751814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:47.789 [2024-07-23 13:51:38.751834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:47.789 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:47.789 #26 NEW cov: 11797 ft: 14085 corp: 10/183b lim: 50 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 CrossOver- 00:09:47.789 [2024-07-23 13:51:38.801641] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:47.789 [2024-07-23 13:51:38.801676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:47.789 [2024-07-23 13:51:38.801724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:47.789 [2024-07-23 13:51:38.801746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.046 #27 NEW cov: 11797 ft: 14115 corp: 11/204b lim: 50 exec/s: 0 rss: 69Mb L: 21/32 MS: 1 ShuffleBytes- 00:09:48.046 [2024-07-23 13:51:38.841697] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.046 [2024-07-23 13:51:38.841732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.046 [2024-07-23 13:51:38.841796] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.046 [2024-07-23 13:51:38.841818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.046 #28 NEW cov: 11797 ft: 14183 corp: 12/225b lim: 50 exec/s: 0 rss: 69Mb L: 21/32 MS: 1 ChangeBit- 00:09:48.046 [2024-07-23 13:51:38.891860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.046 [2024-07-23 13:51:38.891899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.046 [2024-07-23 13:51:38.891955] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.046 [2024-07-23 13:51:38.891977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.046 #29 NEW cov: 11797 ft: 14199 corp: 13/246b lim: 50 exec/s: 29 rss: 69Mb L: 21/32 MS: 1 ShuffleBytes- 00:09:48.046 [2024-07-23 13:51:38.951853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.046 [2024-07-23 13:51:38.951889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.046 #30 NEW cov: 11797 ft: 14211 corp: 14/263b lim: 50 exec/s: 30 rss: 69Mb L: 17/32 MS: 1 EraseBytes- 00:09:48.046 [2024-07-23 13:51:39.012158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.046 [2024-07-23 13:51:39.012193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.046 [2024-07-23 13:51:39.012262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.046 [2024-07-23 13:51:39.012284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.046 #31 NEW cov: 11797 ft: 14246 corp: 15/284b lim: 50 exec/s: 31 rss: 69Mb L: 21/32 MS: 1 CrossOver- 00:09:48.304 [2024-07-23 13:51:39.072197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.304 [2024-07-23 13:51:39.072238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.304 #32 NEW cov: 11797 ft: 14274 corp: 16/296b lim: 50 exec/s: 32 rss: 69Mb L: 12/32 MS: 1 EraseBytes- 00:09:48.304 [2024-07-23 13:51:39.122329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.304 [2024-07-23 13:51:39.122365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.304 #33 NEW cov: 11797 ft: 14294 corp: 17/309b lim: 50 exec/s: 33 rss: 70Mb L: 13/32 MS: 1 CopyPart- 00:09:48.304 [2024-07-23 13:51:39.182637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.304 [2024-07-23 13:51:39.182673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.304 [2024-07-23 13:51:39.182735] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.304 [2024-07-23 13:51:39.182757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.304 #34 NEW cov: 11797 ft: 14304 corp: 18/330b lim: 50 exec/s: 34 rss: 70Mb L: 21/32 MS: 1 ChangeByte- 00:09:48.304 [2024-07-23 13:51:39.242977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.304 [2024-07-23 13:51:39.243013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.304 [2024-07-23 13:51:39.243071] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.304 [2024-07-23 13:51:39.243093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.304 [2024-07-23 13:51:39.243161] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:09:48.304 [2024-07-23 13:51:39.243181] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:48.304 #35 NEW cov: 11797 ft: 14316 corp: 19/367b lim: 50 exec/s: 35 rss: 70Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:09:48.304 [2024-07-23 13:51:39.302983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.304 [2024-07-23 13:51:39.303020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.304 [2024-07-23 13:51:39.303067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.304 [2024-07-23 13:51:39.303089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.562 #36 NEW cov: 11797 ft: 14348 corp: 20/388b lim: 50 exec/s: 36 rss: 70Mb L: 21/37 MS: 1 ChangeByte- 00:09:48.562 [2024-07-23 13:51:39.342949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.562 [2024-07-23 13:51:39.342984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.562 #37 NEW cov: 11797 ft: 14389 corp: 21/401b lim: 50 exec/s: 37 rss: 70Mb L: 13/37 MS: 1 ChangeByte- 00:09:48.562 [2024-07-23 13:51:39.393242] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.562 [2024-07-23 13:51:39.393277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.562 [2024-07-23 13:51:39.393321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.562 [2024-07-23 13:51:39.393341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.562 #38 NEW cov: 11797 ft: 14408 corp: 22/422b lim: 50 exec/s: 38 rss: 70Mb L: 21/37 MS: 1 ShuffleBytes- 00:09:48.562 [2024-07-23 13:51:39.433348] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.562 [2024-07-23 13:51:39.433385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.562 [2024-07-23 13:51:39.433429] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.562 [2024-07-23 13:51:39.433449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.562 #39 NEW cov: 11797 ft: 14409 corp: 23/442b lim: 50 exec/s: 39 rss: 70Mb L: 20/37 MS: 1 CrossOver- 00:09:48.562 [2024-07-23 13:51:39.483355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.562 [2024-07-23 13:51:39.483390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.562 #40 NEW cov: 11797 ft: 14438 corp: 24/453b lim: 50 exec/s: 40 rss: 70Mb L: 11/37 MS: 1 EraseBytes- 00:09:48.562 [2024-07-23 13:51:39.543500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.562 [2024-07-23 
13:51:39.543535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.819 #41 NEW cov: 11797 ft: 14470 corp: 25/471b lim: 50 exec/s: 41 rss: 70Mb L: 18/37 MS: 1 InsertByte- 00:09:48.819 [2024-07-23 13:51:39.603840] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.819 [2024-07-23 13:51:39.603877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.819 [2024-07-23 13:51:39.603924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.819 [2024-07-23 13:51:39.603947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.820 #42 NEW cov: 11797 ft: 14495 corp: 26/492b lim: 50 exec/s: 42 rss: 70Mb L: 21/37 MS: 1 ChangeBit- 00:09:48.820 [2024-07-23 13:51:39.664043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.820 [2024-07-23 13:51:39.664084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.820 [2024-07-23 13:51:39.664152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:09:48.820 [2024-07-23 13:51:39.664176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:48.820 #43 NEW cov: 11797 ft: 14507 corp: 27/513b lim: 50 exec/s: 43 rss: 70Mb L: 21/37 MS: 1 ChangeBit- 00:09:48.820 [2024-07-23 13:51:39.714013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.820 [2024-07-23 13:51:39.714049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.820 #44 NEW cov: 11797 ft: 14509 corp: 28/525b lim: 50 exec/s: 44 rss: 70Mb L: 12/37 MS: 1 EraseBytes- 00:09:48.820 [2024-07-23 13:51:39.764195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.820 [2024-07-23 13:51:39.764238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:48.820 #45 NEW cov: 11797 ft: 14528 corp: 29/541b lim: 50 exec/s: 45 rss: 70Mb L: 16/37 MS: 1 InsertByte- 00:09:48.820 [2024-07-23 13:51:39.824335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:48.820 [2024-07-23 13:51:39.824371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.078 #47 NEW cov: 11797 ft: 14550 corp: 30/553b lim: 50 exec/s: 47 rss: 70Mb L: 12/37 MS: 2 EraseBytes-InsertRepeatedBytes- 00:09:49.078 [2024-07-23 13:51:39.874638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:09:49.078 [2024-07-23 13:51:39.874674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.078 [2024-07-23 13:51:39.874721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 
00:09:49.078 [2024-07-23 13:51:39.874743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:49.078 #48 NEW cov: 11797 ft: 14558 corp: 31/574b lim: 50 exec/s: 24 rss: 70Mb L: 21/37 MS: 1 ShuffleBytes- 00:09:49.078 #48 DONE cov: 11797 ft: 14558 corp: 31/574b lim: 50 exec/s: 24 rss: 70Mb 00:09:49.078 Done 48 runs in 2 second(s) 00:09:49.078 13:51:40 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_21.conf 00:09:49.078 13:51:40 -- ../common.sh@72 -- # (( i++ )) 00:09:49.078 13:51:40 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:49.078 13:51:40 -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:09:49.078 13:51:40 -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:09:49.078 13:51:40 -- nvmf/run.sh@24 -- # local timen=1 00:09:49.078 13:51:40 -- nvmf/run.sh@25 -- # local core=0x1 00:09:49.078 13:51:40 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:49.078 13:51:40 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:09:49.078 13:51:40 -- nvmf/run.sh@29 -- # printf %02d 22 00:09:49.078 13:51:40 -- nvmf/run.sh@29 -- # port=4422 00:09:49.078 13:51:40 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:49.078 13:51:40 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:09:49.078 13:51:40 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:49.078 13:51:40 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 -r /var/tmp/spdk22.sock 00:09:49.336 [2024-07-23 13:51:40.104743] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:49.336 [2024-07-23 13:51:40.104816] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917626 ] 00:09:49.336 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.594 [2024-07-23 13:51:40.367047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.594 [2024-07-23 13:51:40.452707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:49.594 [2024-07-23 13:51:40.452899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.594 [2024-07-23 13:51:40.515540] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.594 [2024-07-23 13:51:40.531772] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:09:49.594 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:49.594 INFO: Seed: 3414931289 00:09:49.594 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:49.594 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:49.594 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:09:49.594 INFO: A corpus is not provided, starting from an empty corpus 00:09:49.594 #2 INITED exec/s: 0 rss: 61Mb 00:09:49.594 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:49.594 This may also happen if the target rejected all inputs we tried so far 00:09:49.594 [2024-07-23 13:51:40.586903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:49.594 [2024-07-23 13:51:40.586951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:49.594 [2024-07-23 13:51:40.587004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:49.594 [2024-07-23 13:51:40.587031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:50.109 NEW_FUNC[1/672]: 0x4a82f0 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:09:50.109 NEW_FUNC[2/672]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:50.109 #7 NEW cov: 11596 ft: 11597 corp: 2/36b lim: 85 exec/s: 0 rss: 68Mb L: 35/35 MS: 5 ChangeBinInt-InsertByte-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:09:50.109 [2024-07-23 13:51:41.098087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.109 [2024-07-23 13:51:41.098145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.367 #8 NEW cov: 11709 ft: 12751 corp: 3/63b lim: 85 exec/s: 0 rss: 69Mb L: 27/35 MS: 1 EraseBytes- 00:09:50.367 [2024-07-23 13:51:41.198197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.367 [2024-07-23 13:51:41.198256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.367 #10 NEW cov: 11715 ft: 12976 corp: 4/91b lim: 85 exec/s: 0 rss: 69Mb L: 28/35 MS: 2 ChangeByte-CrossOver- 00:09:50.367 [2024-07-23 13:51:41.278476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.367 [2024-07-23 13:51:41.278519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.367 [2024-07-23 13:51:41.278569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:50.367 [2024-07-23 13:51:41.278596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:50.367 #11 NEW cov: 11800 ft: 13378 corp: 5/139b lim: 85 exec/s: 0 rss: 69Mb L: 48/48 MS: 1 CopyPart- 00:09:50.367 [2024-07-23 13:51:41.378676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 
00:09:50.367 [2024-07-23 13:51:41.378718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.624 #12 NEW cov: 11800 ft: 13520 corp: 6/172b lim: 85 exec/s: 0 rss: 69Mb L: 33/48 MS: 1 InsertRepeatedBytes- 00:09:50.624 [2024-07-23 13:51:41.458924] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.624 [2024-07-23 13:51:41.458965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.624 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:50.624 #13 NEW cov: 11817 ft: 13595 corp: 7/199b lim: 85 exec/s: 0 rss: 69Mb L: 27/48 MS: 1 ShuffleBytes- 00:09:50.624 [2024-07-23 13:51:41.559169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.624 [2024-07-23 13:51:41.559220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.624 #14 NEW cov: 11817 ft: 13638 corp: 8/230b lim: 85 exec/s: 14 rss: 69Mb L: 31/48 MS: 1 InsertRepeatedBytes- 00:09:50.624 [2024-07-23 13:51:41.629452] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.624 [2024-07-23 13:51:41.629494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.624 [2024-07-23 13:51:41.629546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:50.624 [2024-07-23 13:51:41.629572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:50.881 #15 NEW cov: 11817 ft: 13703 corp: 9/278b lim: 85 exec/s: 15 rss: 69Mb L: 48/48 MS: 1 ChangeBit- 00:09:50.881 [2024-07-23 13:51:41.719613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.881 [2024-07-23 13:51:41.719655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.881 #21 NEW cov: 11817 ft: 13731 corp: 10/306b lim: 85 exec/s: 21 rss: 69Mb L: 28/48 MS: 1 ChangeBit- 00:09:50.881 [2024-07-23 13:51:41.789852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.881 [2024-07-23 13:51:41.789895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:50.881 [2024-07-23 13:51:41.789947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:50.881 [2024-07-23 13:51:41.789974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:50.881 #22 NEW cov: 11817 ft: 13777 corp: 11/351b lim: 85 exec/s: 22 rss: 69Mb L: 45/48 MS: 1 CopyPart- 00:09:50.881 [2024-07-23 13:51:41.869981] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:50.881 [2024-07-23 13:51:41.870022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:09:51.139 #23 NEW cov: 11817 ft: 13793 corp: 12/378b lim: 85 exec/s: 23 rss: 69Mb L: 27/48 MS: 1 ChangeBinInt- 00:09:51.139 [2024-07-23 13:51:41.970534] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.139 [2024-07-23 13:51:41.970574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.139 [2024-07-23 13:51:41.970624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:51.139 [2024-07-23 13:51:41.970651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:51.139 [2024-07-23 13:51:41.970703] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:51.139 [2024-07-23 13:51:41.970727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:51.139 #24 NEW cov: 11817 ft: 14172 corp: 13/444b lim: 85 exec/s: 24 rss: 69Mb L: 66/66 MS: 1 CopyPart- 00:09:51.139 [2024-07-23 13:51:42.070571] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.139 [2024-07-23 13:51:42.070612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.139 #25 NEW cov: 11817 ft: 14185 corp: 14/471b lim: 85 exec/s: 25 rss: 69Mb L: 27/66 MS: 1 CrossOver- 00:09:51.139 [2024-07-23 13:51:42.150830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.139 [2024-07-23 13:51:42.150873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.396 #26 NEW cov: 11817 ft: 14192 corp: 15/498b lim: 85 exec/s: 26 rss: 70Mb L: 27/66 MS: 1 ShuffleBytes- 00:09:51.396 [2024-07-23 13:51:42.221262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.396 [2024-07-23 13:51:42.221303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.397 [2024-07-23 13:51:42.221353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:51.397 [2024-07-23 13:51:42.221379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:51.397 [2024-07-23 13:51:42.221426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:09:51.397 [2024-07-23 13:51:42.221450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:51.397 [2024-07-23 13:51:42.221493] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:09:51.397 [2024-07-23 13:51:42.221517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:51.397 #27 NEW cov: 11817 ft: 14536 corp: 16/572b lim: 85 exec/s: 27 rss: 70Mb L: 74/74 MS: 1 InsertRepeatedBytes- 00:09:51.397 [2024-07-23 13:51:42.321313] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.397 [2024-07-23 13:51:42.321356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.397 #28 NEW cov: 11817 ft: 14555 corp: 17/594b lim: 85 exec/s: 28 rss: 70Mb L: 22/74 MS: 1 EraseBytes- 00:09:51.655 [2024-07-23 13:51:42.421682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.655 [2024-07-23 13:51:42.421724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.655 [2024-07-23 13:51:42.421776] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:09:51.655 [2024-07-23 13:51:42.421802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:51.655 #29 NEW cov: 11824 ft: 14628 corp: 18/642b lim: 85 exec/s: 29 rss: 70Mb L: 48/74 MS: 1 ChangeByte- 00:09:51.655 [2024-07-23 13:51:42.501765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.655 [2024-07-23 13:51:42.501807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.655 #30 NEW cov: 11824 ft: 14671 corp: 19/669b lim: 85 exec/s: 30 rss: 70Mb L: 27/74 MS: 1 CrossOver- 00:09:51.655 [2024-07-23 13:51:42.571969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:09:51.655 [2024-07-23 13:51:42.572017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:51.655 #31 NEW cov: 11824 ft: 14687 corp: 20/692b lim: 85 exec/s: 15 rss: 70Mb L: 23/74 MS: 1 EraseBytes- 00:09:51.655 #31 DONE cov: 11824 ft: 14687 corp: 20/692b lim: 85 exec/s: 15 rss: 70Mb 00:09:51.655 Done 31 runs in 2 second(s) 00:09:51.914 13:51:42 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_22.conf 00:09:51.914 13:51:42 -- ../common.sh@72 -- # (( i++ )) 00:09:51.914 13:51:42 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:51.914 13:51:42 -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:09:51.914 13:51:42 -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:09:51.914 13:51:42 -- nvmf/run.sh@24 -- # local timen=1 00:09:51.914 13:51:42 -- nvmf/run.sh@25 -- # local core=0x1 00:09:51.914 13:51:42 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:51.914 13:51:42 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:09:51.914 13:51:42 -- nvmf/run.sh@29 -- # printf %02d 23 00:09:51.914 13:51:42 -- nvmf/run.sh@29 -- # port=4423 00:09:51.914 13:51:42 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:51.914 13:51:42 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:09:51.914 13:51:42 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:51.914 13:51:42 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 -r /var/tmp/spdk23.sock 00:09:51.914 [2024-07-23 13:51:42.828297] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:51.914 [2024-07-23 13:51:42.828371] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917987 ] 00:09:51.914 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.172 [2024-07-23 13:51:43.084363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.172 [2024-07-23 13:51:43.169349] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:52.172 [2024-07-23 13:51:43.169538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.430 [2024-07-23 13:51:43.232175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.430 [2024-07-23 13:51:43.248411] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:09:52.430 INFO: Running with entropic power schedule (0xFF, 100). 00:09:52.430 INFO: Seed: 1836957263 00:09:52.430 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:52.430 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:52.430 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:09:52.430 INFO: A corpus is not provided, starting from an empty corpus 00:09:52.430 #2 INITED exec/s: 0 rss: 61Mb 00:09:52.430 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:52.430 This may also happen if the target rejected all inputs we tried so far 00:09:52.430 [2024-07-23 13:51:43.314312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:52.430 [2024-07-23 13:51:43.314353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:52.430 [2024-07-23 13:51:43.314400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:52.430 [2024-07-23 13:51:43.314422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:52.430 [2024-07-23 13:51:43.314495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:52.430 [2024-07-23 13:51:43.314514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:52.431 [2024-07-23 13:51:43.314583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:52.431 [2024-07-23 13:51:43.314605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:52.996 NEW_FUNC[1/671]: 0x4ab520 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:09:52.996 NEW_FUNC[2/671]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:52.996 #5 NEW cov: 11524 ft: 11523 corp: 2/22b lim: 25 exec/s: 0 rss: 68Mb L: 21/21 MS: 3 CopyPart-ChangeByte-InsertRepeatedBytes- 00:09:52.996 [2024-07-23 13:51:43.785985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:52.996 [2024-07-23 13:51:43.786078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.786199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:52.996 [2024-07-23 13:51:43.786269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.786354] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:52.996 [2024-07-23 13:51:43.786382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.786466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:52.996 [2024-07-23 13:51:43.786495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:52.996 #6 NEW cov: 11642 ft: 12167 corp: 3/43b lim: 25 exec/s: 0 rss: 69Mb L: 21/21 MS: 1 ShuffleBytes- 00:09:52.996 [2024-07-23 13:51:43.845593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:52.996 [2024-07-23 13:51:43.845632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:52.996 [2024-07-23 13:51:43.845683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:52.996 [2024-07-23 13:51:43.845703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.845773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:52.996 [2024-07-23 13:51:43.845793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.845860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:52.996 [2024-07-23 13:51:43.845881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:52.996 #12 NEW cov: 11648 ft: 12496 corp: 4/67b lim: 25 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 CopyPart- 00:09:52.996 [2024-07-23 13:51:43.895623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:52.996 [2024-07-23 13:51:43.895660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.895709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:52.996 [2024-07-23 13:51:43.895730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.895804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:52.996 [2024-07-23 13:51:43.895827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:52.996 #13 NEW cov: 11733 ft: 13165 corp: 5/86b lim: 25 exec/s: 0 rss: 69Mb L: 19/24 MS: 1 InsertRepeatedBytes- 00:09:52.996 [2024-07-23 13:51:43.945873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:52.996 [2024-07-23 13:51:43.945910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.945963] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:52.996 [2024-07-23 13:51:43.945984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.946050] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:52.996 [2024-07-23 13:51:43.946071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:43.946135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:52.996 [2024-07-23 13:51:43.946156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:52.996 #14 NEW cov: 11733 ft: 13200 corp: 6/108b lim: 25 exec/s: 0 rss: 69Mb L: 22/24 MS: 1 InsertRepeatedBytes- 
00:09:52.996 [2024-07-23 13:51:44.006007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:52.996 [2024-07-23 13:51:44.006044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:44.006096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:52.996 [2024-07-23 13:51:44.006116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:44.006182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:52.996 [2024-07-23 13:51:44.006204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:52.996 [2024-07-23 13:51:44.006278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:52.996 [2024-07-23 13:51:44.006301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.254 #15 NEW cov: 11733 ft: 13252 corp: 7/129b lim: 25 exec/s: 0 rss: 69Mb L: 21/24 MS: 1 ChangeBit- 00:09:53.254 [2024-07-23 13:51:44.056472] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.254 [2024-07-23 13:51:44.056508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.056565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.254 [2024-07-23 13:51:44.056587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.056653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.254 [2024-07-23 13:51:44.056674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.056740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.254 [2024-07-23 13:51:44.056761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.254 #16 NEW cov: 11733 ft: 13371 corp: 8/150b lim: 25 exec/s: 0 rss: 69Mb L: 21/24 MS: 1 ChangeBinInt- 00:09:53.254 [2024-07-23 13:51:44.116322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.254 [2024-07-23 13:51:44.116358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.116418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.254 [2024-07-23 13:51:44.116438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.116503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:2 nsid:0 00:09:53.254 [2024-07-23 13:51:44.116525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.116591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.254 [2024-07-23 13:51:44.116612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.254 #17 NEW cov: 11733 ft: 13382 corp: 9/172b lim: 25 exec/s: 0 rss: 69Mb L: 22/24 MS: 1 ChangeByte- 00:09:53.254 [2024-07-23 13:51:44.176364] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.254 [2024-07-23 13:51:44.176400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.176446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.254 [2024-07-23 13:51:44.176467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.176531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.254 [2024-07-23 13:51:44.176551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.254 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:53.254 #21 NEW cov: 11756 ft: 13461 corp: 10/189b lim: 25 exec/s: 0 rss: 69Mb L: 17/24 MS: 4 CopyPart-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:09:53.254 [2024-07-23 13:51:44.226675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.254 [2024-07-23 13:51:44.226711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.226774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.254 [2024-07-23 13:51:44.226796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.226862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.254 [2024-07-23 13:51:44.226882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.226949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.254 [2024-07-23 13:51:44.226970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.254 #22 NEW cov: 11756 ft: 13544 corp: 11/210b lim: 25 exec/s: 0 rss: 69Mb L: 21/24 MS: 1 ChangeBinInt- 00:09:53.254 [2024-07-23 13:51:44.266511] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.254 [2024-07-23 13:51:44.266546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:09:53.254 [2024-07-23 13:51:44.266599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.254 [2024-07-23 13:51:44.266621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.512 #23 NEW cov: 11756 ft: 13877 corp: 12/224b lim: 25 exec/s: 23 rss: 69Mb L: 14/24 MS: 1 EraseBytes- 00:09:53.512 [2024-07-23 13:51:44.326611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.512 [2024-07-23 13:51:44.326646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.326693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.512 [2024-07-23 13:51:44.326716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.512 #24 NEW cov: 11756 ft: 13916 corp: 13/238b lim: 25 exec/s: 24 rss: 69Mb L: 14/24 MS: 1 EraseBytes- 00:09:53.512 [2024-07-23 13:51:44.387131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.512 [2024-07-23 13:51:44.387166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.387233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.512 [2024-07-23 13:51:44.387256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.387322] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.512 [2024-07-23 13:51:44.387342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.387409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.512 [2024-07-23 13:51:44.387430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.512 #25 NEW cov: 11756 ft: 13963 corp: 14/260b lim: 25 exec/s: 25 rss: 69Mb L: 22/24 MS: 1 ChangeBinInt- 00:09:53.512 [2024-07-23 13:51:44.437196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.512 [2024-07-23 13:51:44.437236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.437300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.512 [2024-07-23 13:51:44.437323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.437389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.512 [2024-07-23 13:51:44.437408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 
m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.437476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.512 [2024-07-23 13:51:44.437497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.512 #26 NEW cov: 11756 ft: 13989 corp: 15/282b lim: 25 exec/s: 26 rss: 69Mb L: 22/24 MS: 1 ShuffleBytes- 00:09:53.512 [2024-07-23 13:51:44.487368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.512 [2024-07-23 13:51:44.487403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.487466] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.512 [2024-07-23 13:51:44.487492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.512 [2024-07-23 13:51:44.487557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.512 [2024-07-23 13:51:44.487578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.513 [2024-07-23 13:51:44.487647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.513 [2024-07-23 13:51:44.487668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.513 #27 NEW cov: 11756 ft: 14023 corp: 16/304b lim: 25 exec/s: 27 rss: 69Mb L: 22/24 MS: 1 ChangeBinInt- 00:09:53.771 [2024-07-23 13:51:44.537567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.771 [2024-07-23 13:51:44.537602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.537657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.771 [2024-07-23 13:51:44.537678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.537745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.771 [2024-07-23 13:51:44.537766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.537835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.771 [2024-07-23 13:51:44.537855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.771 #28 NEW cov: 11756 ft: 14031 corp: 17/328b lim: 25 exec/s: 28 rss: 70Mb L: 24/24 MS: 1 CopyPart- 00:09:53.771 [2024-07-23 13:51:44.597723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.771 [2024-07-23 13:51:44.597759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:53.771 [2024-07-23 13:51:44.597817] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.771 [2024-07-23 13:51:44.597838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.597903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.771 [2024-07-23 13:51:44.597924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.597991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.771 [2024-07-23 13:51:44.598011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.771 #29 NEW cov: 11756 ft: 14063 corp: 18/349b lim: 25 exec/s: 29 rss: 70Mb L: 21/24 MS: 1 ChangeBinInt- 00:09:53.771 [2024-07-23 13:51:44.657907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.771 [2024-07-23 13:51:44.657943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.658002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.771 [2024-07-23 13:51:44.658024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.658094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.771 [2024-07-23 13:51:44.658114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.658182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.771 [2024-07-23 13:51:44.658204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.771 #30 NEW cov: 11756 ft: 14075 corp: 19/372b lim: 25 exec/s: 30 rss: 70Mb L: 23/24 MS: 1 CrossOver- 00:09:53.771 [2024-07-23 13:51:44.718098] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.771 [2024-07-23 13:51:44.718133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.718196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.771 [2024-07-23 13:51:44.718223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.718287] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.771 [2024-07-23 13:51:44.718308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.718372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:3 nsid:0 00:09:53.771 [2024-07-23 13:51:44.718393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:53.771 #31 NEW cov: 11756 ft: 14188 corp: 20/393b lim: 25 exec/s: 31 rss: 70Mb L: 21/24 MS: 1 ShuffleBytes- 00:09:53.771 [2024-07-23 13:51:44.778272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:53.771 [2024-07-23 13:51:44.778307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.778368] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:53.771 [2024-07-23 13:51:44.778390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.778458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:53.771 [2024-07-23 13:51:44.778478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:53.771 [2024-07-23 13:51:44.778548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:53.771 [2024-07-23 13:51:44.778570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.029 #32 NEW cov: 11756 ft: 14252 corp: 21/415b lim: 25 exec/s: 32 rss: 70Mb L: 22/24 MS: 1 InsertByte- 00:09:54.029 [2024-07-23 13:51:44.828382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.029 [2024-07-23 13:51:44.828417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.828476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.029 [2024-07-23 13:51:44.828498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.828565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.029 [2024-07-23 13:51:44.828586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.828658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.029 [2024-07-23 13:51:44.828678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.029 #33 NEW cov: 11756 ft: 14265 corp: 22/437b lim: 25 exec/s: 33 rss: 70Mb L: 22/24 MS: 1 ShuffleBytes- 00:09:54.029 [2024-07-23 13:51:44.888624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.029 [2024-07-23 13:51:44.888661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.888722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 
00:09:54.029 [2024-07-23 13:51:44.888743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.888809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.029 [2024-07-23 13:51:44.888829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.888894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.029 [2024-07-23 13:51:44.888916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.029 #34 NEW cov: 11756 ft: 14291 corp: 23/460b lim: 25 exec/s: 34 rss: 70Mb L: 23/24 MS: 1 CrossOver- 00:09:54.029 [2024-07-23 13:51:44.948760] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.029 [2024-07-23 13:51:44.948796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.948857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.029 [2024-07-23 13:51:44.948879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.948948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.029 [2024-07-23 13:51:44.948969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:44.949037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.029 [2024-07-23 13:51:44.949057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.029 #35 NEW cov: 11756 ft: 14297 corp: 24/484b lim: 25 exec/s: 35 rss: 70Mb L: 24/24 MS: 1 CopyPart- 00:09:54.029 [2024-07-23 13:51:45.008960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.029 [2024-07-23 13:51:45.008996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:45.009055] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.029 [2024-07-23 13:51:45.009076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:45.009144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.029 [2024-07-23 13:51:45.009165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.029 [2024-07-23 13:51:45.009233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.029 [2024-07-23 13:51:45.009254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.029 #36 NEW cov: 11756 ft: 14310 corp: 25/505b lim: 25 exec/s: 36 rss: 70Mb L: 21/24 MS: 1 CopyPart- 00:09:54.288 [2024-07-23 13:51:45.059064] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.288 [2024-07-23 13:51:45.059100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.288 [2024-07-23 13:51:45.059159] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.288 [2024-07-23 13:51:45.059180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.288 [2024-07-23 13:51:45.059249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.288 [2024-07-23 13:51:45.059269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.288 [2024-07-23 13:51:45.059334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.288 [2024-07-23 13:51:45.059355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.289 #37 NEW cov: 11756 ft: 14343 corp: 26/527b lim: 25 exec/s: 37 rss: 70Mb L: 22/24 MS: 1 ChangeByte- 00:09:54.289 [2024-07-23 13:51:45.119320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.289 [2024-07-23 13:51:45.119356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.119409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.289 [2024-07-23 13:51:45.119430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.119497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.289 [2024-07-23 13:51:45.119518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.119582] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.289 [2024-07-23 13:51:45.119603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.289 #38 NEW cov: 11756 ft: 14396 corp: 27/550b lim: 25 exec/s: 38 rss: 70Mb L: 23/24 MS: 1 InsertRepeatedBytes- 00:09:54.289 [2024-07-23 13:51:45.169409] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.289 [2024-07-23 13:51:45.169444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.169504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.289 [2024-07-23 13:51:45.169526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 
cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.169592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.289 [2024-07-23 13:51:45.169612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.169680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.289 [2024-07-23 13:51:45.169702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.289 #39 NEW cov: 11756 ft: 14407 corp: 28/572b lim: 25 exec/s: 39 rss: 70Mb L: 22/24 MS: 1 ChangeBinInt- 00:09:54.289 [2024-07-23 13:51:45.229606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.289 [2024-07-23 13:51:45.229646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.229694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.289 [2024-07-23 13:51:45.229716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.229784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.289 [2024-07-23 13:51:45.229805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.229872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.289 [2024-07-23 13:51:45.229891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.289 #40 NEW cov: 11756 ft: 14422 corp: 29/594b lim: 25 exec/s: 40 rss: 70Mb L: 22/24 MS: 1 ChangeByte- 00:09:54.289 [2024-07-23 13:51:45.279667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:09:54.289 [2024-07-23 13:51:45.279702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.279762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:09:54.289 [2024-07-23 13:51:45.279784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.279851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:09:54.289 [2024-07-23 13:51:45.279872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:54.289 [2024-07-23 13:51:45.279937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:09:54.289 [2024-07-23 13:51:45.279958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:54.576 #41 NEW cov: 11756 ft: 14426 corp: 30/617b lim: 25 exec/s: 20 rss: 71Mb L: 23/24 MS: 
1 ChangeBinInt- 00:09:54.576 #41 DONE cov: 11756 ft: 14426 corp: 30/617b lim: 25 exec/s: 20 rss: 71Mb 00:09:54.576 Done 41 runs in 2 second(s) 00:09:54.576 13:51:45 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_23.conf 00:09:54.576 13:51:45 -- ../common.sh@72 -- # (( i++ )) 00:09:54.576 13:51:45 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:54.576 13:51:45 -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:09:54.576 13:51:45 -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:09:54.576 13:51:45 -- nvmf/run.sh@24 -- # local timen=1 00:09:54.576 13:51:45 -- nvmf/run.sh@25 -- # local core=0x1 00:09:54.576 13:51:45 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:54.576 13:51:45 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:09:54.576 13:51:45 -- nvmf/run.sh@29 -- # printf %02d 24 00:09:54.576 13:51:45 -- nvmf/run.sh@29 -- # port=4424 00:09:54.576 13:51:45 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:54.576 13:51:45 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:09:54.576 13:51:45 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:54.576 13:51:45 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 -r /var/tmp/spdk24.sock 00:09:54.576 [2024-07-23 13:51:45.510848] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:54.576 [2024-07-23 13:51:45.510946] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918355 ] 00:09:54.576 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.847 [2024-07-23 13:51:45.768940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.847 [2024-07-23 13:51:45.854342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:54.847 [2024-07-23 13:51:45.854529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.105 [2024-07-23 13:51:45.917116] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.105 [2024-07-23 13:51:45.933347] tcp.c: 952:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:09:55.105 INFO: Running with entropic power schedule (0xFF, 100). 00:09:55.105 INFO: Seed: 225994649 00:09:55.105 INFO: Loaded 1 modules (341341 inline 8-bit counters): 341341 [0x280a94c, 0x285dea9), 00:09:55.105 INFO: Loaded 1 PC tables (341341 PCs): 341341 [0x285deb0,0x2d93480), 00:09:55.105 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:09:55.105 INFO: A corpus is not provided, starting from an empty corpus 00:09:55.105 #2 INITED exec/s: 0 rss: 61Mb 00:09:55.105 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:55.105 This may also happen if the target rejected all inputs we tried so far 00:09:55.105 [2024-07-23 13:51:45.989252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.105 [2024-07-23 13:51:45.989292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.105 [2024-07-23 13:51:45.989341] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.105 [2024-07-23 13:51:45.989365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.105 [2024-07-23 13:51:45.989435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.105 [2024-07-23 13:51:45.989460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.670 NEW_FUNC[1/672]: 0x4ac600 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:09:55.670 NEW_FUNC[2/672]: 0x4bd260 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:55.670 #14 NEW cov: 11597 ft: 11585 corp: 2/66b lim: 100 exec/s: 0 rss: 68Mb L: 65/65 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:55.670 [2024-07-23 13:51:46.459975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.460033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.670 #15 NEW cov: 11714 ft: 12985 corp: 3/103b lim: 100 exec/s: 0 rss: 68Mb L: 37/65 MS: 1 EraseBytes- 00:09:55.670 [2024-07-23 13:51:46.520321] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.520365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.670 [2024-07-23 13:51:46.520431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.520453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.670 [2024-07-23 13:51:46.520515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.520541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.670 #16 NEW cov: 11720 ft: 13190 corp: 4/168b lim: 100 exec/s: 0 rss: 69Mb L: 65/65 MS: 1 ShuffleBytes- 00:09:55.670 [2024-07-23 13:51:46.570111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.570148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.670 #17 NEW cov: 11805 ft: 13394 corp: 5/205b lim: 100 exec/s: 0 rss: 69Mb L: 37/65 MS: 1 ChangeBit- 00:09:55.670 [2024-07-23 13:51:46.630439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.630477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.670 [2024-07-23 13:51:46.630537] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.630559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.670 #18 NEW cov: 11805 ft: 13794 corp: 6/258b lim: 100 exec/s: 0 rss: 69Mb L: 53/65 MS: 1 EraseBytes- 00:09:55.670 [2024-07-23 13:51:46.690776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.690812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.670 [2024-07-23 13:51:46.690854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.690876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.670 [2024-07-23 13:51:46.690940] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.670 [2024-07-23 13:51:46.690959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.928 #19 NEW cov: 11805 ft: 13894 corp: 7/333b lim: 100 exec/s: 0 rss: 69Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:09:55.928 [2024-07-23 13:51:46.740949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.740984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.741029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.741051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.741113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.741133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.928 #20 NEW cov: 11805 ft: 13949 corp: 8/398b lim: 100 exec/s: 0 rss: 69Mb L: 65/75 MS: 1 ChangeByte- 00:09:55.928 [2024-07-23 13:51:46.791065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.791101] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.791148] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.791174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.791247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.791269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.928 #21 NEW cov: 11805 ft: 13978 corp: 9/463b lim: 100 exec/s: 0 rss: 69Mb L: 65/75 MS: 1 ChangeByte- 00:09:55.928 [2024-07-23 13:51:46.841223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294967296 len:3585 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.841259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.841304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.841326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.841389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.841409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.928 NEW_FUNC[1/1]: 0x195e300 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:55.928 #22 NEW cov: 11828 ft: 14055 corp: 10/528b lim: 100 exec/s: 0 rss: 69Mb L: 65/75 MS: 1 CMP- DE: "\001\000\000\016"- 00:09:55.928 [2024-07-23 13:51:46.891325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.891362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.891410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.891431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:55.928 [2024-07-23 13:51:46.891493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.928 [2024-07-23 13:51:46.891514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:55.928 #23 NEW cov: 11828 ft: 14178 corp: 11/593b lim: 100 exec/s: 0 rss: 69Mb L: 65/75 MS: 1 ChangeBinInt- 00:09:56.186 [2024-07-23 13:51:46.951567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:46.951604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:46.951646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:46.951669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:46.951731] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:46.951752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.186 #24 NEW cov: 11828 ft: 14210 corp: 12/658b lim: 100 exec/s: 24 rss: 69Mb L: 65/75 MS: 1 ChangeByte- 00:09:56.186 [2024-07-23 13:51:47.011677] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.011713] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.011758] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.011779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.011840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.011862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.186 #25 NEW cov: 11828 ft: 14255 corp: 13/724b lim: 100 exec/s: 25 rss: 69Mb L: 66/75 MS: 1 InsertByte- 00:09:56.186 [2024-07-23 13:51:47.051475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.051511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.186 #26 NEW cov: 11828 ft: 14337 corp: 14/761b lim: 100 exec/s: 26 rss: 69Mb L: 37/75 MS: 1 ChangeBit- 00:09:56.186 [2024-07-23 13:51:47.111993] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.112029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.112083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.112103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.112165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.112185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.186 #27 NEW cov: 11828 ft: 14366 corp: 15/823b lim: 100 exec/s: 27 rss: 70Mb L: 62/75 MS: 1 EraseBytes- 00:09:56.186 [2024-07-23 13:51:47.172306] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:35 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.172342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.172390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.172412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.172475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.172495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.186 [2024-07-23 13:51:47.172557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.186 [2024-07-23 13:51:47.172579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:56.186 #28 NEW cov: 11828 ft: 14749 corp: 16/904b lim: 100 exec/s: 28 rss: 70Mb L: 81/81 MS: 1 InsertRepeatedBytes- 00:09:56.443 [2024-07-23 13:51:47.222124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.222164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.222235] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.222254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.443 #29 NEW cov: 11828 ft: 14759 corp: 17/946b lim: 100 exec/s: 29 rss: 70Mb L: 42/81 MS: 1 EraseBytes- 00:09:56.443 [2024-07-23 13:51:47.272088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:13238251626465572791 len:47032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.272125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.443 #32 NEW cov: 11828 ft: 14778 corp: 18/971b lim: 100 exec/s: 32 rss: 70Mb L: 25/81 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:09:56.443 [2024-07-23 13:51:47.322723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:35 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.322759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.322810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.322832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.322893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.322914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.322976] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.322994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:56.443 #33 NEW cov: 11828 ft: 14817 corp: 19/1052b lim: 100 exec/s: 33 rss: 70Mb L: 81/81 MS: 1 ChangeByte- 00:09:56.443 [2024-07-23 13:51:47.382574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.382610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.382654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.382675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.443 #34 NEW cov: 11828 ft: 14844 corp: 20/1094b lim: 100 exec/s: 34 rss: 70Mb L: 42/81 MS: 1 ShuffleBytes- 00:09:56.443 [2024-07-23 13:51:47.442932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:184687394816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.442968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.443011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.443032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.443 [2024-07-23 13:51:47.443093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.443 [2024-07-23 13:51:47.443118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.702 #35 NEW cov: 11828 ft: 14863 corp: 21/1159b lim: 100 exec/s: 35 rss: 70Mb L: 65/81 MS: 1 ChangeByte- 00:09:56.702 [2024-07-23 13:51:47.503097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.503133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:09:56.702 [2024-07-23 13:51:47.503179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.503200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.503271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.503291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.702 #36 NEW cov: 11828 ft: 14885 corp: 22/1224b lim: 100 exec/s: 36 rss: 70Mb L: 65/81 MS: 1 CrossOver- 00:09:56.702 [2024-07-23 13:51:47.553417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.553453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.553515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.553536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.553596] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.553617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.553678] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.553700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:56.702 #47 NEW cov: 11828 ft: 14916 corp: 23/1320b lim: 100 exec/s: 47 rss: 70Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:09:56.702 [2024-07-23 13:51:47.613376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.613412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.613459] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.613481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.613542] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15636497906230362112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.613564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.702 #48 NEW cov: 11828 ft: 14931 corp: 24/1385b lim: 100 exec/s: 48 
rss: 70Mb L: 65/96 MS: 1 CrossOver- 00:09:56.702 [2024-07-23 13:51:47.673238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.673277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.702 #49 NEW cov: 11828 ft: 14940 corp: 25/1422b lim: 100 exec/s: 49 rss: 70Mb L: 37/96 MS: 1 ShuffleBytes- 00:09:56.702 [2024-07-23 13:51:47.723569] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3801088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.723606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.702 [2024-07-23 13:51:47.723648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.702 [2024-07-23 13:51:47.723670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.961 #50 NEW cov: 11828 ft: 14959 corp: 26/1481b lim: 100 exec/s: 50 rss: 70Mb L: 59/96 MS: 1 EraseBytes- 00:09:56.961 [2024-07-23 13:51:47.773894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.773930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.773977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.773999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.774061] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.774082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.961 #51 NEW cov: 11828 ft: 14965 corp: 27/1553b lim: 100 exec/s: 51 rss: 70Mb L: 72/96 MS: 1 CrossOver- 00:09:56.961 [2024-07-23 13:51:47.824223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.824260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.824322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:8608480567731124087 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.824344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.824404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:8608480567727519607 len:30584 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.824425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.824486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.824507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:09:56.961 #52 NEW cov: 11828 ft: 14971 corp: 28/1650b lim: 100 exec/s: 52 rss: 70Mb L: 97/97 MS: 1 InsertByte- 00:09:56.961 [2024-07-23 13:51:47.884204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.884247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.884291] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16252928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.884317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.884379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:15636497906230362112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.884399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.961 #53 NEW cov: 11828 ft: 14980 corp: 29/1715b lim: 100 exec/s: 53 rss: 71Mb L: 65/97 MS: 1 ChangeBinInt- 00:09:56.961 [2024-07-23 13:51:47.944419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.944456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.944504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.944525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:09:56.961 [2024-07-23 13:51:47.944589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:932007903232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:56.961 [2024-07-23 13:51:47.944611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:09:56.961 #54 NEW cov: 11828 ft: 14986 corp: 30/1780b lim: 100 exec/s: 27 rss: 71Mb L: 65/97 MS: 1 CopyPart- 00:09:56.961 #54 DONE cov: 11828 ft: 14986 corp: 30/1780b lim: 100 exec/s: 27 rss: 71Mb 00:09:56.961 ###### Recommended dictionary. ###### 00:09:56.961 "\001\000\000\016" # Uses: 0 00:09:56.961 ###### End of recommended dictionary. 
###### 00:09:56.961 Done 54 runs in 2 second(s) 00:09:57.219 13:51:48 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_24.conf 00:09:57.219 13:51:48 -- ../common.sh@72 -- # (( i++ )) 00:09:57.219 13:51:48 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:57.219 13:51:48 -- nvmf/run.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:09:57.219 00:09:57.219 real 1m9.222s 00:09:57.219 user 1m39.282s 00:09:57.219 sys 0m10.476s 00:09:57.219 13:51:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.219 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:09:57.219 ************************************ 00:09:57.219 END TEST nvmf_fuzz 00:09:57.219 ************************************ 00:09:57.219 13:51:48 -- fuzz/llvm.sh@60 -- # for fuzzer in "${fuzzers[@]}" 00:09:57.219 13:51:48 -- fuzz/llvm.sh@61 -- # case "$fuzzer" in 00:09:57.219 13:51:48 -- fuzz/llvm.sh@63 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:57.219 13:51:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:57.219 13:51:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:57.219 13:51:48 -- common/autotest_common.sh@10 -- # set +x 00:09:57.219 ************************************ 00:09:57.219 START TEST vfio_fuzz 00:09:57.219 ************************************ 00:09:57.219 13:51:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:09:57.479 * Looking for test storage... 00:09:57.479 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:57.479 13:51:48 -- vfio/run.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:57.479 13:51:48 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:57.479 13:51:48 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:57.479 13:51:48 -- common/autotest_common.sh@34 -- # set -e 00:09:57.479 13:51:48 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:57.479 13:51:48 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:57.479 13:51:48 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:57.479 13:51:48 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:57.479 13:51:48 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:57.479 13:51:48 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:57.479 13:51:48 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:57.479 13:51:48 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:57.479 13:51:48 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:57.479 13:51:48 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:57.479 13:51:48 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:57.479 13:51:48 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:57.479 13:51:48 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:57.479 13:51:48 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:57.479 13:51:48 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:57.479 13:51:48 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:57.479 13:51:48 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:57.479 13:51:48 -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:57.479 13:51:48 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:57.479 13:51:48 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:57.479 13:51:48 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:57.479 13:51:48 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:57.479 13:51:48 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:57.479 13:51:48 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:57.479 13:51:48 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:57.479 13:51:48 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:57.479 13:51:48 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:57.479 13:51:48 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:57.479 13:51:48 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:57.479 13:51:48 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:57.479 13:51:48 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:57.479 13:51:48 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:57.479 13:51:48 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:57.479 13:51:48 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:57.479 13:51:48 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:57.479 13:51:48 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:09:57.479 13:51:48 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:57.479 13:51:48 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:57.479 13:51:48 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:57.479 13:51:48 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:57.479 13:51:48 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:57.479 13:51:48 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:57.479 13:51:48 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:57.479 13:51:48 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:57.479 13:51:48 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:09:57.479 13:51:48 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:09:57.479 13:51:48 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:57.479 13:51:48 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:09:57.479 13:51:48 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:09:57.479 13:51:48 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:09:57.479 13:51:48 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:09:57.479 13:51:48 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:09:57.479 13:51:48 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:09:57.479 13:51:48 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:09:57.479 13:51:48 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:09:57.479 13:51:48 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:09:57.479 13:51:48 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:09:57.479 13:51:48 -- 
common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:09:57.479 13:51:48 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:09:57.479 13:51:48 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:09:57.479 13:51:48 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:09:57.479 13:51:48 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:09:57.479 13:51:48 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:09:57.480 13:51:48 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:57.480 13:51:48 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:09:57.480 13:51:48 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:09:57.480 13:51:48 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:09:57.480 13:51:48 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:09:57.480 13:51:48 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:09:57.480 13:51:48 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:09:57.480 13:51:48 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:09:57.480 13:51:48 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:09:57.480 13:51:48 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:09:57.480 13:51:48 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:09:57.480 13:51:48 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:57.480 13:51:48 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:09:57.480 13:51:48 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:09:57.480 13:51:48 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:57.480 13:51:48 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:57.480 13:51:48 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:57.480 13:51:48 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:57.480 13:51:48 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:57.480 13:51:48 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:57.480 13:51:48 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:57.480 13:51:48 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:57.480 13:51:48 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:57.480 13:51:48 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:57.480 13:51:48 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:57.480 13:51:48 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:57.480 13:51:48 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:57.480 13:51:48 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:57.480 13:51:48 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:57.480 13:51:48 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:57.480 #define SPDK_CONFIG_H 00:09:57.480 #define SPDK_CONFIG_APPS 1 00:09:57.480 #define SPDK_CONFIG_ARCH native 00:09:57.480 #undef SPDK_CONFIG_ASAN 00:09:57.480 #undef SPDK_CONFIG_AVAHI 00:09:57.480 #undef SPDK_CONFIG_CET 00:09:57.480 #define 
SPDK_CONFIG_COVERAGE 1 00:09:57.480 #define SPDK_CONFIG_CROSS_PREFIX 00:09:57.480 #undef SPDK_CONFIG_CRYPTO 00:09:57.480 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:57.480 #undef SPDK_CONFIG_CUSTOMOCF 00:09:57.480 #undef SPDK_CONFIG_DAOS 00:09:57.480 #define SPDK_CONFIG_DAOS_DIR 00:09:57.480 #define SPDK_CONFIG_DEBUG 1 00:09:57.480 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:57.480 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:57.480 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:57.480 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:57.480 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:57.480 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:57.480 #define SPDK_CONFIG_EXAMPLES 1 00:09:57.480 #undef SPDK_CONFIG_FC 00:09:57.480 #define SPDK_CONFIG_FC_PATH 00:09:57.480 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:57.480 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:57.480 #undef SPDK_CONFIG_FUSE 00:09:57.480 #define SPDK_CONFIG_FUZZER 1 00:09:57.480 #define SPDK_CONFIG_FUZZER_LIB /usr/lib64/clang/16/lib/libclang_rt.fuzzer_no_main-x86_64.a 00:09:57.480 #undef SPDK_CONFIG_GOLANG 00:09:57.480 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:57.480 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:57.480 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:57.480 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:57.480 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:57.480 #define SPDK_CONFIG_IDXD 1 00:09:57.480 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:57.480 #undef SPDK_CONFIG_IPSEC_MB 00:09:57.480 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:57.480 #define SPDK_CONFIG_ISAL 1 00:09:57.480 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:57.480 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:57.480 #define SPDK_CONFIG_LIBDIR 00:09:57.480 #undef SPDK_CONFIG_LTO 00:09:57.480 #define SPDK_CONFIG_MAX_LCORES 00:09:57.480 #define SPDK_CONFIG_NVME_CUSE 1 00:09:57.480 #undef SPDK_CONFIG_OCF 00:09:57.480 #define SPDK_CONFIG_OCF_PATH 00:09:57.480 #define SPDK_CONFIG_OPENSSL_PATH 00:09:57.480 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:57.480 #undef SPDK_CONFIG_PGO_USE 00:09:57.480 #define SPDK_CONFIG_PREFIX /usr/local 00:09:57.480 #undef SPDK_CONFIG_RAID5F 00:09:57.480 #undef SPDK_CONFIG_RBD 00:09:57.480 #define SPDK_CONFIG_RDMA 1 00:09:57.480 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:57.480 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:57.480 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:57.480 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:57.480 #undef SPDK_CONFIG_SHARED 00:09:57.480 #undef SPDK_CONFIG_SMA 00:09:57.480 #define SPDK_CONFIG_TESTS 1 00:09:57.480 #undef SPDK_CONFIG_TSAN 00:09:57.480 #define SPDK_CONFIG_UBLK 1 00:09:57.480 #define SPDK_CONFIG_UBSAN 1 00:09:57.480 #undef SPDK_CONFIG_UNIT_TESTS 00:09:57.480 #undef SPDK_CONFIG_URING 00:09:57.480 #define SPDK_CONFIG_URING_PATH 00:09:57.480 #undef SPDK_CONFIG_URING_ZNS 00:09:57.480 #undef SPDK_CONFIG_USDT 00:09:57.480 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:57.480 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:57.480 #define SPDK_CONFIG_VFIO_USER 1 00:09:57.480 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:57.480 #define SPDK_CONFIG_VHOST 1 00:09:57.480 #define SPDK_CONFIG_VIRTIO 1 00:09:57.480 #undef SPDK_CONFIG_VTUNE 00:09:57.480 #define SPDK_CONFIG_VTUNE_DIR 00:09:57.480 #define SPDK_CONFIG_WERROR 1 00:09:57.480 #define SPDK_CONFIG_WPDK_DIR 00:09:57.480 #undef SPDK_CONFIG_XNVME 00:09:57.480 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:57.480 13:51:48 -- common/applications.sh@24 -- # (( 
SPDK_AUTOTEST_DEBUG_APPS )) 00:09:57.480 13:51:48 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:57.480 13:51:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.480 13:51:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.480 13:51:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.480 13:51:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.480 13:51:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.480 13:51:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.480 13:51:48 -- paths/export.sh@5 -- # export PATH 00:09:57.480 13:51:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.480 13:51:48 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.480 13:51:48 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:57.480 13:51:48 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:57.480 13:51:48 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:57.480 13:51:48 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:57.480 13:51:48 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:57.480 13:51:48 -- pm/common@16 -- # TEST_TAG=N/A 00:09:57.480 13:51:48 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:57.480 13:51:48 -- common/autotest_common.sh@52 -- # : 1 00:09:57.480 13:51:48 -- 
common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:09:57.481 13:51:48 -- common/autotest_common.sh@56 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:57.481 13:51:48 -- common/autotest_common.sh@58 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:09:57.481 13:51:48 -- common/autotest_common.sh@60 -- # : 1 00:09:57.481 13:51:48 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:57.481 13:51:48 -- common/autotest_common.sh@62 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:09:57.481 13:51:48 -- common/autotest_common.sh@64 -- # : 00:09:57.481 13:51:48 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:09:57.481 13:51:48 -- common/autotest_common.sh@66 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:09:57.481 13:51:48 -- common/autotest_common.sh@68 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:09:57.481 13:51:48 -- common/autotest_common.sh@70 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:09:57.481 13:51:48 -- common/autotest_common.sh@72 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:57.481 13:51:48 -- common/autotest_common.sh@74 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:09:57.481 13:51:48 -- common/autotest_common.sh@76 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:09:57.481 13:51:48 -- common/autotest_common.sh@78 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:09:57.481 13:51:48 -- common/autotest_common.sh@80 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:09:57.481 13:51:48 -- common/autotest_common.sh@82 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:09:57.481 13:51:48 -- common/autotest_common.sh@84 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:09:57.481 13:51:48 -- common/autotest_common.sh@86 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:09:57.481 13:51:48 -- common/autotest_common.sh@88 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:09:57.481 13:51:48 -- common/autotest_common.sh@90 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:57.481 13:51:48 -- common/autotest_common.sh@92 -- # : 1 00:09:57.481 13:51:48 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:09:57.481 13:51:48 -- common/autotest_common.sh@94 -- # : 1 00:09:57.481 13:51:48 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:09:57.481 13:51:48 -- common/autotest_common.sh@96 -- # : rdma 00:09:57.481 13:51:48 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:57.481 13:51:48 -- common/autotest_common.sh@98 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:09:57.481 13:51:48 -- common/autotest_common.sh@100 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:09:57.481 13:51:48 -- common/autotest_common.sh@102 -- # : 0 00:09:57.481 
13:51:48 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:09:57.481 13:51:48 -- common/autotest_common.sh@104 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:09:57.481 13:51:48 -- common/autotest_common.sh@106 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:09:57.481 13:51:48 -- common/autotest_common.sh@108 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:09:57.481 13:51:48 -- common/autotest_common.sh@110 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:09:57.481 13:51:48 -- common/autotest_common.sh@112 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:57.481 13:51:48 -- common/autotest_common.sh@114 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:09:57.481 13:51:48 -- common/autotest_common.sh@116 -- # : 1 00:09:57.481 13:51:48 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:09:57.481 13:51:48 -- common/autotest_common.sh@118 -- # : 00:09:57.481 13:51:48 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:57.481 13:51:48 -- common/autotest_common.sh@120 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:09:57.481 13:51:48 -- common/autotest_common.sh@122 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:09:57.481 13:51:48 -- common/autotest_common.sh@124 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:09:57.481 13:51:48 -- common/autotest_common.sh@126 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:09:57.481 13:51:48 -- common/autotest_common.sh@128 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:09:57.481 13:51:48 -- common/autotest_common.sh@130 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:09:57.481 13:51:48 -- common/autotest_common.sh@132 -- # : 00:09:57.481 13:51:48 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:09:57.481 13:51:48 -- common/autotest_common.sh@134 -- # : true 00:09:57.481 13:51:48 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:09:57.481 13:51:48 -- common/autotest_common.sh@136 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:09:57.481 13:51:48 -- common/autotest_common.sh@138 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:09:57.481 13:51:48 -- common/autotest_common.sh@140 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:09:57.481 13:51:48 -- common/autotest_common.sh@142 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:09:57.481 13:51:48 -- common/autotest_common.sh@144 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:09:57.481 13:51:48 -- common/autotest_common.sh@146 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:09:57.481 13:51:48 -- common/autotest_common.sh@148 -- # : 00:09:57.481 13:51:48 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:09:57.481 13:51:48 -- common/autotest_common.sh@150 -- # : 0 
00:09:57.481 13:51:48 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:09:57.481 13:51:48 -- common/autotest_common.sh@152 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:09:57.481 13:51:48 -- common/autotest_common.sh@154 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:09:57.481 13:51:48 -- common/autotest_common.sh@156 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:09:57.481 13:51:48 -- common/autotest_common.sh@158 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:09:57.481 13:51:48 -- common/autotest_common.sh@160 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:09:57.481 13:51:48 -- common/autotest_common.sh@163 -- # : 00:09:57.481 13:51:48 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:09:57.481 13:51:48 -- common/autotest_common.sh@165 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:09:57.481 13:51:48 -- common/autotest_common.sh@167 -- # : 0 00:09:57.481 13:51:48 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:57.481 13:51:48 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:57.481 13:51:48 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:57.481 13:51:48 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:57.481 13:51:48 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:57.481 13:51:48 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.481 13:51:48 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.481 13:51:48 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.482 13:51:48 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:57.482 13:51:48 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.482 13:51:48 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:57.482 13:51:48 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:57.482 13:51:48 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:57.482 13:51:48 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:57.482 13:51:48 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:09:57.482 13:51:48 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.482 13:51:48 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:57.482 13:51:48 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.482 13:51:48 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:57.482 13:51:48 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:57.482 13:51:48 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:09:57.482 13:51:48 -- common/autotest_common.sh@196 -- # cat 00:09:57.482 13:51:48 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:09:57.482 13:51:48 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.482 13:51:48 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:57.482 13:51:48 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.482 13:51:48 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:57.482 13:51:48 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:09:57.482 13:51:48 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:09:57.482 13:51:48 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:57.482 13:51:48 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:57.482 13:51:48 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:57.482 13:51:48 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:57.482 13:51:48 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.482 13:51:48 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:57.482 13:51:48 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.482 13:51:48 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:57.482 13:51:48 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.482 13:51:48 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:57.482 13:51:48 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.482 13:51:48 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:57.482 13:51:48 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:09:57.482 13:51:48 -- common/autotest_common.sh@249 -- # export valgrind= 00:09:57.482 13:51:48 -- common/autotest_common.sh@249 -- # valgrind= 00:09:57.482 13:51:48 -- common/autotest_common.sh@255 -- # uname -s 00:09:57.482 13:51:48 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:09:57.482 13:51:48 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:09:57.482 13:51:48 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:09:57.482 13:51:48 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:09:57.482 13:51:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:09:57.482 13:51:48 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:09:57.482 13:51:48 -- common/autotest_common.sh@265 -- # MAKE=make 00:09:57.482 13:51:48 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:09:57.482 13:51:48 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:09:57.482 13:51:48 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:09:57.482 13:51:48 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:57.482 13:51:48 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:09:57.482 13:51:48 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:09:57.482 13:51:48 -- common/autotest_common.sh@309 -- # [[ -z 3918751 ]] 00:09:57.482 13:51:48 -- common/autotest_common.sh@309 -- # kill -0 3918751 00:09:57.482 13:51:48 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:09:57.482 13:51:48 -- common/autotest_common.sh@319 -- # [[ -v 
testdir ]] 00:09:57.482 13:51:48 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:09:57.482 13:51:48 -- common/autotest_common.sh@322 -- # local mount target_dir 00:09:57.482 13:51:48 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:09:57.482 13:51:48 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:09:57.482 13:51:48 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:09:57.482 13:51:48 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:09:57.482 13:51:48 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.LKH1h6 00:09:57.482 13:51:48 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:57.482 13:51:48 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:09:57.482 13:51:48 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:09:57.482 13:51:48 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.LKH1h6/tests/vfio /tmp/spdk.LKH1h6 00:09:57.482 13:51:48 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:09:57.482 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.482 13:51:48 -- common/autotest_common.sh@318 -- # df -T 00:09:57.482 13:51:48 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:09:57.482 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:09:57.482 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=893108224 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:09:57.482 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=4391321600 00:09:57.482 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=82051739648 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94508572672 00:09:57.482 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=12456833024 00:09:57.482 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=47200768000 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:09:57.482 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:09:57.482 13:51:48 -- common/autotest_common.sh@351 -- # read -r 
source fs size use avail _ mount 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=18895626240 00:09:57.482 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18901716992 00:09:57.482 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=6090752 00:09:57.482 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.482 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:57.483 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:57.483 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=47253069824 00:09:57.483 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254286336 00:09:57.483 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=1216512 00:09:57.483 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.483 13:51:48 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:09:57.483 13:51:48 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:09:57.483 13:51:48 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450852352 00:09:57.483 13:51:48 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450856448 00:09:57.483 13:51:48 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:09:57.483 13:51:48 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:09:57.483 13:51:48 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:09:57.483 * Looking for test storage... 00:09:57.483 13:51:48 -- common/autotest_common.sh@359 -- # local target_space new_size 00:09:57.483 13:51:48 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:09:57.483 13:51:48 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:57.483 13:51:48 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:57.483 13:51:48 -- common/autotest_common.sh@363 -- # mount=/ 00:09:57.483 13:51:48 -- common/autotest_common.sh@365 -- # target_space=82051739648 00:09:57.483 13:51:48 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:09:57.483 13:51:48 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:09:57.483 13:51:48 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:09:57.483 13:51:48 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:09:57.483 13:51:48 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:09:57.483 13:51:48 -- common/autotest_common.sh@372 -- # new_size=14671425536 00:09:57.483 13:51:48 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:57.483 13:51:48 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:57.483 13:51:48 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:57.483 13:51:48 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:57.483 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:09:57.483 
13:51:48 -- common/autotest_common.sh@380 -- # return 0 00:09:57.483 13:51:48 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:09:57.483 13:51:48 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:09:57.483 13:51:48 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:57.483 13:51:48 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:57.483 13:51:48 -- common/autotest_common.sh@1672 -- # true 00:09:57.483 13:51:48 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:09:57.483 13:51:48 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:57.483 13:51:48 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:57.483 13:51:48 -- common/autotest_common.sh@27 -- # exec 00:09:57.483 13:51:48 -- common/autotest_common.sh@29 -- # exec 00:09:57.483 13:51:48 -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:57.483 13:51:48 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:57.483 13:51:48 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:57.483 13:51:48 -- common/autotest_common.sh@18 -- # set -x 00:09:57.483 13:51:48 -- vfio/run.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:09:57.483 13:51:48 -- ../common.sh@8 -- # pids=() 00:09:57.483 13:51:48 -- vfio/run.sh@58 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:57.483 13:51:48 -- vfio/run.sh@59 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:09:57.483 13:51:48 -- vfio/run.sh@59 -- # fuzz_num=7 00:09:57.483 13:51:48 -- vfio/run.sh@60 -- # (( fuzz_num != 0 )) 00:09:57.483 13:51:48 -- vfio/run.sh@62 -- # trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT 00:09:57.483 13:51:48 -- vfio/run.sh@65 -- # mem_size=0 00:09:57.483 13:51:48 -- vfio/run.sh@66 -- # [[ 1 -eq 1 ]] 00:09:57.483 13:51:48 -- vfio/run.sh@67 -- # start_llvm_fuzz_short 7 1 00:09:57.483 13:51:48 -- ../common.sh@69 -- # local fuzz_num=7 00:09:57.483 13:51:48 -- ../common.sh@70 -- # local time=1 00:09:57.483 13:51:48 -- ../common.sh@72 -- # (( i = 0 )) 00:09:57.483 13:51:48 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:57.483 13:51:48 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:57.483 13:51:48 -- vfio/run.sh@22 -- # local fuzzer_type=0 00:09:57.483 13:51:48 -- vfio/run.sh@23 -- # local timen=1 00:09:57.483 13:51:48 -- vfio/run.sh@24 -- # local core=0x1 00:09:57.483 13:51:48 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:57.483 13:51:48 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:09:57.483 13:51:48 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:09:57.483 13:51:48 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:09:57.483 13:51:48 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:09:57.483 13:51:48 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:57.483 13:51:48 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:09:57.483 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 
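Before each target is launched, run.sh derives the target count and builds a private workspace, exactly as traced above: fuzz_num comes from grep -c '\.fn =' over llvm_vfio_fuzz.c (7 targets in this build), and the shared fuzz_vfio_json.conf template is rewritten with sed so run N talks only to its own /tmp/vfio-user-N sockets. A minimal stand-alone sketch of that step follows; prep_run is a hypothetical name, and the redirect of the sed output into the per-run conf file is inferred from the -c argument in the launch line below.

    #!/usr/bin/env bash
    # Sketch of the per-run preparation traced above (vfio/run.sh@31 and @34).
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    fuzz_num=$(grep -c '\.fn =' "$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c")

    prep_run() {
        local n=$1 dir=/tmp/vfio-user-$n
        # One tree per run: fuzzer dir, the two vfio-user domains, a corpus dir.
        mkdir -p "$dir" "$dir/domain/1" "$dir/domain/2" "$spdk/../corpus/llvm_vfio_$n"
        # Point the config template at this run's private domain sockets.
        sed -e "s%/tmp/vfio-user/domain/1%$dir/domain/1%; s%/tmp/vfio-user/domain/2%$dir/domain/2%" \
            "$spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$dir/fuzz_vfio_json.conf"
    }

    prep_run 0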
00:09:57.483 13:51:48 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:09:57.483 [2024-07-23 13:51:48.475163] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:57.483 [2024-07-23 13:51:48.475265] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918791 ] 00:09:57.742 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.742 [2024-07-23 13:51:48.588776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.742 [2024-07-23 13:51:48.687944] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:57.742 [2024-07-23 13:51:48.688156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.000 INFO: Running with entropic power schedule (0xFF, 100). 00:09:58.000 INFO: Seed: 3177998645 00:09:58.000 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:09:58.000 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:09:58.000 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:09:58.000 INFO: A corpus is not provided, starting from an empty corpus 00:09:58.000 #2 INITED exec/s: 0 rss: 61Mb 00:09:58.000 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
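The launch line above wires run 0 together. The locals traced in run.sh suggest the flag mapping: -Z carries fuzzer_type (which of the seven .fn entries to drive), -t the time budget (timen=1), -m the core mask, -F and -Y the two vfio-user domain directories, -D the corpus directory, and -r the per-run RPC socket. A parameterized sketch of the same invocation, with launch_run as a hypothetical wrapper and all flag values copied from the log:

    # Hypothetical wrapper around the launch line above; flags as logged.
    spdk=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    launch_run() {
        local n=$1 dir=/tmp/vfio-user-$n
        "$spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
            -m 0x1 -s 0 \
            -P "$spdk/../output/llvm/" \
            -F "$dir/domain/1" \
            -c "$dir/fuzz_vfio_json.conf" \
            -t 1 \
            -D "$spdk/../corpus/llvm_vfio_$n" \
            -Y "$dir/domain/2" \
            -r "$dir/spdk$n.sock" \
            -Z "$n"
    }
    launch_run 0

Together with the prep_run sketch earlier, reproducing a single run outside Jenkins would reduce to prep_run N && launch_run N on a machine with the same layout. The Seed, module/PC-table, and corpus lines that follow are standard libFuzzer startup output.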
00:09:58.000 This may also happen if the target rejected all inputs we tried so far 00:09:58.824 NEW_FUNC[1/626]: 0x4806f0 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:85 00:09:58.824 NEW_FUNC[2/626]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:09:58.824 #11 NEW cov: 10617 ft: 10675 corp: 2/41b lim: 60 exec/s: 0 rss: 68Mb L: 40/40 MS: 4 InsertByte-ChangeByte-InsertByte-InsertRepeatedBytes- 00:09:58.824 NEW_FUNC[1/6]: 0x12608d0 in nvmf_transport_req_complete /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:735 00:09:58.824 NEW_FUNC[2/6]: 0x192aaa0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:09:58.824 #12 NEW cov: 10747 ft: 14175 corp: 3/81b lim: 60 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeByte- 00:09:59.082 #18 NEW cov: 10747 ft: 14498 corp: 4/127b lim: 60 exec/s: 18 rss: 70Mb L: 46/46 MS: 1 InsertRepeatedBytes- 00:09:59.340 #25 NEW cov: 10747 ft: 14649 corp: 5/161b lim: 60 exec/s: 25 rss: 70Mb L: 34/46 MS: 2 ChangeByte-InsertRepeatedBytes- 00:09:59.599 #26 NEW cov: 10747 ft: 15389 corp: 6/195b lim: 60 exec/s: 26 rss: 70Mb L: 34/46 MS: 1 ChangeBit- 00:09:59.857 #27 NEW cov: 10747 ft: 16100 corp: 7/234b lim: 60 exec/s: 27 rss: 70Mb L: 39/46 MS: 1 CrossOver- 00:09:59.857 #38 NEW cov: 10754 ft: 16364 corp: 8/267b lim: 60 exec/s: 38 rss: 70Mb L: 33/46 MS: 1 EraseBytes- 00:10:00.115 #39 NEW cov: 10754 ft: 16427 corp: 9/315b lim: 60 exec/s: 19 rss: 70Mb L: 48/48 MS: 1 CMP- DE: "\377\377\377\377\377\377\377p"- 00:10:00.115 #39 DONE cov: 10754 ft: 16427 corp: 9/315b lim: 60 exec/s: 19 rss: 70Mb 00:10:00.115 ###### Recommended dictionary. ###### 00:10:00.115 "\377\377\377\377\377\377\377p" # Uses: 0 00:10:00.115 ###### End of recommended dictionary. 
###### 00:10:00.115 Done 39 runs in 2 second(s) 00:10:00.374 13:51:51 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-0 00:10:00.374 13:51:51 -- ../common.sh@72 -- # (( i++ )) 00:10:00.374 13:51:51 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:00.374 13:51:51 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:10:00.374 13:51:51 -- vfio/run.sh@22 -- # local fuzzer_type=1 00:10:00.374 13:51:51 -- vfio/run.sh@23 -- # local timen=1 00:10:00.374 13:51:51 -- vfio/run.sh@24 -- # local core=0x1 00:10:00.374 13:51:51 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:00.374 13:51:51 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:10:00.374 13:51:51 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:10:00.374 13:51:51 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:10:00.374 13:51:51 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:10:00.374 13:51:51 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:00.374 13:51:51 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:10:00.374 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:00.375 13:51:51 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:10:00.633 [2024-07-23 13:51:51.404656] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:00.633 [2024-07-23 13:51:51.404738] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919159 ] 00:10:00.633 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.633 [2024-07-23 13:51:51.534012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.633 [2024-07-23 13:51:51.636366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:00.633 [2024-07-23 13:51:51.636575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.892 INFO: Running with entropic power schedule (0xFF, 100). 00:10:00.892 INFO: Seed: 1835047276 00:10:00.892 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:10:00.892 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:10:00.892 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:00.892 INFO: A corpus is not provided, starting from an empty corpus 00:10:00.892 #2 INITED exec/s: 0 rss: 62Mb 00:10:00.892 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
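Run 0 finished 39 executions in its two-second window, with edge coverage rising from 10617 to 10754 and features from 10675 to 16427 over a nine-entry corpus. Because the status lines share a fixed shape, a run's coverage trajectory can be pulled back out of the raw log with a short awk sketch; field positions assume the single leading elapsed-time column used throughout this log, and build.log stands for a saved copy of this console output.

    # Hypothetical post-processing: coverage trajectory from the status lines.
    awk '($3 == "NEW" || $3 == "DONE") && $4 == "cov:" {
        printf "iter=%s cov=%s ft=%s corp=%s\n", $2, $5, $7, $9
    }' build.log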
00:10:00.892 This may also happen if the target rejected all inputs we tried so far 00:10:01.150 [2024-07-23 13:51:52.090713] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:01.150 [2024-07-23 13:51:52.090754] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:01.150 [2024-07-23 13:51:52.090792] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:01.666 NEW_FUNC[1/638]: 0x480c90 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:72 00:10:01.666 NEW_FUNC[2/638]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:01.666 #9 NEW cov: 10731 ft: 10660 corp: 2/18b lim: 40 exec/s: 0 rss: 68Mb L: 17/17 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:01.924 [2024-07-23 13:51:52.761113] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:01.924 [2024-07-23 13:51:52.761164] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:01.924 [2024-07-23 13:51:52.761189] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:01.924 #10 NEW cov: 10745 ft: 13074 corp: 3/24b lim: 40 exec/s: 10 rss: 69Mb L: 6/17 MS: 1 InsertRepeatedBytes- 00:10:02.182 [2024-07-23 13:51:53.002948] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:02.182 [2024-07-23 13:51:53.002980] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:02.182 [2024-07-23 13:51:53.003005] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:02.182 #11 NEW cov: 10745 ft: 14433 corp: 4/33b lim: 40 exec/s: 11 rss: 69Mb L: 9/17 MS: 1 CrossOver- 00:10:02.440 [2024-07-23 13:51:53.243509] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:02.440 [2024-07-23 13:51:53.243541] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:02.440 [2024-07-23 13:51:53.243566] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:02.440 #12 NEW cov: 10745 ft: 15108 corp: 5/39b lim: 40 exec/s: 12 rss: 69Mb L: 6/17 MS: 1 ChangeBinInt- 00:10:02.697 [2024-07-23 13:51:53.484983] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:02.697 [2024-07-23 13:51:53.485013] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:02.697 [2024-07-23 13:51:53.485037] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:02.697 #17 NEW cov: 10745 ft: 15826 corp: 6/43b lim: 40 exec/s: 17 rss: 69Mb L: 4/17 MS: 5 ChangeBit-ShuffleBytes-CMP-ShuffleBytes-CopyPart- DE: "\013\000"- 00:10:02.955 [2024-07-23 13:51:53.726513] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:02.955 [2024-07-23 13:51:53.726544] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:02.955 [2024-07-23 13:51:53.726569] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:02.955 NEW_FUNC[1/1]: 0x192aaa0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:02.955 #18 NEW cov: 10768 ft: 16104 corp: 7/49b lim: 40 exec/s: 18 rss: 70Mb L: 6/17 MS: 1 
ChangeBinInt- 00:10:02.955 [2024-07-23 13:51:53.968380] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:02.955 [2024-07-23 13:51:53.968411] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:02.955 [2024-07-23 13:51:53.968435] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:03.213 #19 NEW cov: 10768 ft: 16132 corp: 8/56b lim: 40 exec/s: 9 rss: 70Mb L: 7/17 MS: 1 CrossOver- 00:10:03.213 #19 DONE cov: 10768 ft: 16132 corp: 8/56b lim: 40 exec/s: 9 rss: 70Mb 00:10:03.213 ###### Recommended dictionary. ###### 00:10:03.213 "\013\000" # Uses: 0 00:10:03.213 ###### End of recommended dictionary. ###### 00:10:03.214 Done 19 runs in 2 second(s) 00:10:03.472 13:51:54 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-1 00:10:03.472 13:51:54 -- ../common.sh@72 -- # (( i++ )) 00:10:03.472 13:51:54 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:03.472 13:51:54 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:10:03.472 13:51:54 -- vfio/run.sh@22 -- # local fuzzer_type=2 00:10:03.472 13:51:54 -- vfio/run.sh@23 -- # local timen=1 00:10:03.472 13:51:54 -- vfio/run.sh@24 -- # local core=0x1 00:10:03.472 13:51:54 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:03.472 13:51:54 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:10:03.473 13:51:54 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:10:03.473 13:51:54 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:10:03.473 13:51:54 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:10:03.473 13:51:54 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:03.473 13:51:54 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:10:03.473 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:03.473 13:51:54 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:10:03.473 [2024-07-23 13:51:54.491351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:03.473 [2024-07-23 13:51:54.491430] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919531 ] 00:10:03.731 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.731 [2024-07-23 13:51:54.605936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.731 [2024-07-23 13:51:54.704345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:03.732 [2024-07-23 13:51:54.704558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.990 INFO: Running with entropic power schedule (0xFF, 100). 
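The rm -rf of /tmp/vfio-user-1 and the immediate jump to fuzzer 2 above come from the driver loop in test/fuzz/llvm/common.sh. Reconstructed from the traced lines (function and variable names as traced, body reduced to what the log actually shows):

    # Reconstruction of the short-fuzz driver loop (../common.sh@69-73).
    start_llvm_fuzz_short() {
        local fuzz_num=$1 time=$2 i
        for (( i = 0; i < fuzz_num; i++ )); do
            start_llvm_fuzz "$i" "$time" 0x1   # fuzzer index, time budget (timen), core mask
        done
    }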
00:10:03.990 INFO: Seed: 606064927 00:10:03.990 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:10:03.990 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:10:03.990 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:03.990 INFO: A corpus is not provided, starting from an empty corpus 00:10:03.990 #2 INITED exec/s: 0 rss: 61Mb 00:10:03.990 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:03.990 This may also happen if the target rejected all inputs we tried so far 00:10:04.248 [2024-07-23 13:51:55.062670] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:04.814 NEW_FUNC[1/636]: 0x481670 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:104 00:10:04.814 NEW_FUNC[2/636]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:04.814 #8 NEW cov: 10708 ft: 10675 corp: 2/25b lim: 80 exec/s: 0 rss: 68Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:10:04.814 [2024-07-23 13:51:55.732520] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:05.071 NEW_FUNC[1/1]: 0x192aaa0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:05.072 #9 NEW cov: 10742 ft: 13712 corp: 3/95b lim: 80 exec/s: 0 rss: 69Mb L: 70/70 MS: 1 InsertRepeatedBytes- 00:10:05.072 [2024-07-23 13:51:55.960499] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:05.329 #10 NEW cov: 10742 ft: 15218 corp: 4/119b lim: 80 exec/s: 10 rss: 70Mb L: 24/70 MS: 1 ChangeBinInt- 00:10:05.329 [2024-07-23 13:51:56.186326] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:05.329 #11 NEW cov: 10742 ft: 15402 corp: 5/138b lim: 80 exec/s: 11 rss: 70Mb L: 19/70 MS: 1 EraseBytes- 00:10:05.587 [2024-07-23 13:51:56.412214] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:05.587 #12 NEW cov: 10742 ft: 16041 corp: 6/162b lim: 80 exec/s: 12 rss: 70Mb L: 24/70 MS: 1 ChangeBinInt- 00:10:05.846 [2024-07-23 13:51:56.637757] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:05.846 #13 NEW cov: 10749 ft: 16431 corp: 7/186b lim: 80 exec/s: 13 rss: 70Mb L: 24/70 MS: 1 ChangeBinInt- 00:10:05.846 [2024-07-23 13:51:56.863240] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:06.104 #14 NEW cov: 10749 ft: 16541 corp: 8/210b lim: 80 exec/s: 7 rss: 70Mb L: 24/70 MS: 1 ChangeBit- 00:10:06.104 #14 DONE cov: 10749 ft: 16541 corp: 8/210b lim: 80 exec/s: 7 rss: 70Mb 00:10:06.104 Done 14 runs in 2 second(s) 00:10:06.362 13:51:57 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-2 00:10:06.362 13:51:57 -- ../common.sh@72 -- # (( i++ )) 00:10:06.362 13:51:57 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:06.362 13:51:57 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:10:06.362 13:51:57 -- vfio/run.sh@22 -- # local fuzzer_type=3 00:10:06.362 13:51:57 -- vfio/run.sh@23 -- # local timen=1 00:10:06.362 13:51:57 -- vfio/run.sh@24 -- # local core=0x1 00:10:06.362 13:51:57 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:06.362 13:51:57 -- 
vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:10:06.362 13:51:57 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:10:06.362 13:51:57 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:10:06.362 13:51:57 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:10:06.363 13:51:57 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:06.363 13:51:57 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:10:06.363 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:06.363 13:51:57 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:10:06.363 [2024-07-23 13:51:57.377417] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:06.363 [2024-07-23 13:51:57.377499] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919966 ] 00:10:06.621 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.621 [2024-07-23 13:51:57.508053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.621 [2024-07-23 13:51:57.607000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:06.621 [2024-07-23 13:51:57.607209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.879 INFO: Running with entropic power schedule (0xFF, 100). 00:10:06.879 INFO: Seed: 3503079839 00:10:06.879 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:10:06.879 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:10:06.879 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:06.879 INFO: A corpus is not provided, starting from an empty corpus 00:10:06.879 #2 INITED exec/s: 0 rss: 61Mb 00:10:06.879 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:06.879 This may also happen if the target rejected all inputs we tried so far 00:10:07.395 NEW_FUNC[1/632]: 0x481d50 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:125 00:10:07.395 NEW_FUNC[2/632]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:07.395 #11 NEW cov: 10698 ft: 10658 corp: 2/80b lim: 320 exec/s: 0 rss: 68Mb L: 79/79 MS: 4 ShuffleBytes-InsertByte-EraseBytes-InsertRepeatedBytes- 00:10:07.653 NEW_FUNC[1/1]: 0x192aaa0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:07.653 #13 NEW cov: 10734 ft: 13929 corp: 3/205b lim: 320 exec/s: 0 rss: 69Mb L: 125/125 MS: 2 InsertByte-InsertRepeatedBytes- 00:10:07.913 #14 NEW cov: 10734 ft: 14913 corp: 4/330b lim: 320 exec/s: 14 rss: 70Mb L: 125/125 MS: 1 ChangeBinInt- 00:10:08.214 #15 NEW cov: 10734 ft: 15460 corp: 5/468b lim: 320 exec/s: 15 rss: 70Mb L: 138/138 MS: 1 InsertRepeatedBytes- 00:10:08.485 #16 NEW cov: 10734 ft: 15544 corp: 6/529b lim: 320 exec/s: 16 rss: 70Mb L: 61/138 MS: 1 EraseBytes- 00:10:08.742 #17 NEW cov: 10734 ft: 15924 corp: 7/598b lim: 320 exec/s: 17 rss: 70Mb L: 69/138 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\200"- 00:10:08.742 #18 NEW cov: 10741 ft: 15972 corp: 8/736b lim: 320 exec/s: 18 rss: 70Mb L: 138/138 MS: 1 ShuffleBytes- 00:10:09.001 #19 NEW cov: 10741 ft: 16345 corp: 9/869b lim: 320 exec/s: 9 rss: 70Mb L: 133/138 MS: 1 InsertRepeatedBytes- 00:10:09.001 #19 DONE cov: 10741 ft: 16345 corp: 9/869b lim: 320 exec/s: 9 rss: 70Mb 00:10:09.001 ###### Recommended dictionary. ###### 00:10:09.001 "\001\000\000\000\000\000\000\200" # Uses: 0 00:10:09.001 ###### End of recommended dictionary. 
###### 00:10:09.001 Done 19 runs in 2 second(s) 00:10:09.567 13:52:00 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-3 00:10:09.567 13:52:00 -- ../common.sh@72 -- # (( i++ )) 00:10:09.567 13:52:00 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:09.567 13:52:00 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:10:09.567 13:52:00 -- vfio/run.sh@22 -- # local fuzzer_type=4 00:10:09.567 13:52:00 -- vfio/run.sh@23 -- # local timen=1 00:10:09.568 13:52:00 -- vfio/run.sh@24 -- # local core=0x1 00:10:09.568 13:52:00 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:09.568 13:52:00 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:10:09.568 13:52:00 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:10:09.568 13:52:00 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:10:09.568 13:52:00 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:10:09.568 13:52:00 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:09.568 13:52:00 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:10:09.568 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:09.568 13:52:00 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:10:09.568 [2024-07-23 13:52:00.339359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:09.568 [2024-07-23 13:52:00.339438] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920435 ] 00:10:09.568 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.568 [2024-07-23 13:52:00.454488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.568 [2024-07-23 13:52:00.554021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:09.568 [2024-07-23 13:52:00.554240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.826 INFO: Running with entropic power schedule (0xFF, 100). 00:10:09.826 INFO: Seed: 2151085967 00:10:09.826 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:10:09.826 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:10:09.826 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:09.826 INFO: A corpus is not provided, starting from an empty corpus 00:10:09.826 #2 INITED exec/s: 0 rss: 61Mb 00:10:09.826 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
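Runs 1, 2, and 4 drive the vfio-user message path directly, so their interesting output is less the coverage counters than the vfio_user.c *ERROR* lines they provoke: bad command, oversized argument length, and failed DMA map with Invalid argument, as seen above and in the run below. A hypothetical triage helper to tally the distinct error shapes across a saved copy of the whole log (message shapes taken from these runs; further normalization left out):

    # Hypothetical triage: normalize and count vfio-user errors from the log.
    grep -o 'vfio_user\.c: *[0-9]*:[a-z_]*: \*ERROR\*: .*' build.log \
        | sed -e 's%/tmp/vfio-user-[0-9]*%/tmp/vfio-user-N%g' \
              -e 's/fd=[0-9]*/fd=N/' -e 's/offset=[^ ]*/offset=X/' \
        | sort | uniq -c | sort -rn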
00:10:09.826 This may also happen if the target rejected all inputs we tried so far 00:10:10.649 NEW_FUNC[1/632]: 0x4825d0 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:145 00:10:10.649 NEW_FUNC[2/632]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:10.649 #18 NEW cov: 10702 ft: 10677 corp: 2/73b lim: 320 exec/s: 0 rss: 68Mb L: 72/72 MS: 1 InsertRepeatedBytes- 00:10:10.649 [2024-07-23 13:52:01.543355] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0 prot=0x3: Invalid argument 00:10:10.649 [2024-07-23 13:52:01.543414] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0 flags=0x3: Invalid argument 00:10:10.649 [2024-07-23 13:52:01.543431] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:10.649 [2024-07-23 13:52:01.543456] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:10.649 [2024-07-23 13:52:01.544350] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:10.649 [2024-07-23 13:52:01.544374] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:10.649 [2024-07-23 13:52:01.544396] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:10.906 NEW_FUNC[1/7]: 0x13323e0 in endpoint_id /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:638 00:10:10.906 NEW_FUNC[2/7]: 0x1332670 in vfio_user_log /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:3084 00:10:10.906 #24 NEW cov: 10768 ft: 13576 corp: 3/202b lim: 320 exec/s: 0 rss: 69Mb L: 129/129 MS: 1 InsertRepeatedBytes- 00:10:10.906 #25 NEW cov: 10768 ft: 15288 corp: 4/274b lim: 320 exec/s: 25 rss: 70Mb L: 72/129 MS: 1 ChangeByte- 00:10:11.164 [2024-07-23 13:52:02.004911] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0x1717171717000000 prot=0x3: Invalid argument 00:10:11.164 [2024-07-23 13:52:02.004947] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x1717171717000000 flags=0x3: Invalid argument 00:10:11.164 [2024-07-23 13:52:02.004964] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:11.164 [2024-07-23 13:52:02.004988] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:11.164 [2024-07-23 13:52:02.005901] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:11.164 [2024-07-23 13:52:02.005927] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:11.164 [2024-07-23 13:52:02.005950] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:11.164 #26 NEW cov: 10768 ft: 15424 corp: 5/458b lim: 320 exec/s: 26 rss: 70Mb L: 184/184 MS: 1 InsertRepeatedBytes- 00:10:11.422 [2024-07-23 13:52:02.224162] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0x1717171717000000 prot=0x3: 
Invalid argument 00:10:11.422 [2024-07-23 13:52:02.224194] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x1717171717000000 flags=0x3: Invalid argument 00:10:11.422 [2024-07-23 13:52:02.224210] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:11.422 [2024-07-23 13:52:02.224253] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:11.422 [2024-07-23 13:52:02.225155] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:11.422 [2024-07-23 13:52:02.225180] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:11.422 [2024-07-23 13:52:02.225203] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:11.422 #32 NEW cov: 10768 ft: 15487 corp: 6/642b lim: 320 exec/s: 32 rss: 70Mb L: 184/184 MS: 1 CrossOver- 00:10:11.679 #38 NEW cov: 10775 ft: 15743 corp: 7/714b lim: 320 exec/s: 38 rss: 70Mb L: 72/184 MS: 1 ChangeBinInt- 00:10:11.679 [2024-07-23 13:52:02.663790] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=325 offset=0x1717171717000000 prot=0x3: Invalid argument 00:10:11.679 [2024-07-23 13:52:02.663824] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x1717171717000000 flags=0x3: Invalid argument 00:10:11.679 [2024-07-23 13:52:02.663845] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:11.679 [2024-07-23 13:52:02.663868] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:11.679 [2024-07-23 13:52:02.664786] vfio_user.c:3094:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:11.679 [2024-07-23 13:52:02.664811] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:11.679 [2024-07-23 13:52:02.664835] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:11.937 #44 NEW cov: 10775 ft: 16267 corp: 8/899b lim: 320 exec/s: 22 rss: 70Mb L: 185/185 MS: 1 InsertByte- 00:10:11.938 #44 DONE cov: 10775 ft: 16267 corp: 8/899b lim: 320 exec/s: 22 rss: 70Mb 00:10:11.938 Done 44 runs in 2 second(s) 00:10:12.196 13:52:03 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-4 00:10:12.196 13:52:03 -- ../common.sh@72 -- # (( i++ )) 00:10:12.196 13:52:03 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:12.196 13:52:03 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:10:12.196 13:52:03 -- vfio/run.sh@22 -- # local fuzzer_type=5 00:10:12.196 13:52:03 -- vfio/run.sh@23 -- # local timen=1 00:10:12.196 13:52:03 -- vfio/run.sh@24 -- # local core=0x1 00:10:12.196 13:52:03 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:12.196 13:52:03 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:10:12.196 13:52:03 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:10:12.196 13:52:03 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:10:12.196 13:52:03 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:10:12.196 13:52:03 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 
/tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:12.196 13:52:03 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:10:12.196 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:12.196 13:52:03 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:10:12.196 [2024-07-23 13:52:03.180482] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:12.196 [2024-07-23 13:52:03.180582] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920797 ] 00:10:12.455 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.455 [2024-07-23 13:52:03.311365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.455 [2024-07-23 13:52:03.412154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:12.455 [2024-07-23 13:52:03.412369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.712 INFO: Running with entropic power schedule (0xFF, 100). 00:10:12.712 INFO: Seed: 714125616 00:10:12.712 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:10:12.712 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:10:12.712 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:12.712 INFO: A corpus is not provided, starting from an empty corpus 00:10:12.712 #2 INITED exec/s: 0 rss: 61Mb 00:10:12.712 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:12.712 This may also happen if the target rejected all inputs we tried so far 00:10:12.712 [2024-07-23 13:52:03.727244] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:12.712 [2024-07-23 13:52:03.727306] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:13.230 NEW_FUNC[1/638]: 0x482fd0 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:172 00:10:13.230 NEW_FUNC[2/638]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:13.230 #13 NEW cov: 10733 ft: 10702 corp: 2/25b lim: 120 exec/s: 0 rss: 68Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:10:13.230 [2024-07-23 13:52:04.198916] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:13.230 [2024-07-23 13:52:04.198987] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:13.488 #14 NEW cov: 10747 ft: 13395 corp: 3/50b lim: 120 exec/s: 0 rss: 69Mb L: 25/25 MS: 1 InsertByte- 00:10:13.488 [2024-07-23 13:52:04.362523] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:13.488 [2024-07-23 13:52:04.362576] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:13.488 NEW_FUNC[1/1]: 0x192aaa0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:13.488 #19 NEW cov: 10764 ft: 14642 corp: 4/158b lim: 120 exec/s: 0 rss: 70Mb L: 108/108 MS: 5 ChangeBit-ChangeBit-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:10:13.747 [2024-07-23 13:52:04.545219] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:13.747 [2024-07-23 13:52:04.545269] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:13.747 #20 NEW cov: 10764 ft: 14984 corp: 5/216b lim: 120 exec/s: 20 rss: 70Mb L: 58/108 MS: 1 CrossOver- 00:10:13.747 [2024-07-23 13:52:04.697795] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:13.747 [2024-07-23 13:52:04.697843] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.006 #21 NEW cov: 10764 ft: 15647 corp: 6/329b lim: 120 exec/s: 21 rss: 70Mb L: 113/113 MS: 1 InsertRepeatedBytes- 00:10:14.006 [2024-07-23 13:52:04.860198] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:14.006 [2024-07-23 13:52:04.860258] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.006 #22 NEW cov: 10764 ft: 15892 corp: 7/343b lim: 120 exec/s: 22 rss: 70Mb L: 14/113 MS: 1 EraseBytes- 00:10:14.006 [2024-07-23 13:52:05.022628] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:14.006 [2024-07-23 13:52:05.022675] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.264 #23 NEW cov: 10764 ft: 15975 corp: 8/460b lim: 120 exec/s: 23 rss: 70Mb L: 117/117 MS: 1 CopyPart- 00:10:14.264 [2024-07-23 13:52:05.184258] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:14.264 [2024-07-23 13:52:05.184304] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.264 #24 NEW cov: 10764 ft: 16192 corp: 9/533b lim: 120 exec/s: 24 rss: 70Mb L: 73/117 MS: 1 EraseBytes- 
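Each "#N NEW" line in these runs is a libFuzzer status report: cov counts covered code edges, ft coverage features, corp gives the corpus size in entries and bytes, exec/s is throughput, rss is resident memory, L is the new input's length against the current corpus maximum, and MS names the mutation sequence that produced it. The paired vfio_user.c errors around them are the server side rejecting each mutated message: the log itself ties cmd 2 and cmd 3 to the "failed to add/remove DMA region" messages, and cmd 8 to the IRQ-set requests this slot (fuzz_vfio_user_irq_set) exercises. Two hedged triage one-liners for a saved console log like this one (the build.log name is illustrative):

# Count which vfio-user commands were rejected, grouped by error string.
grep -oE 'cmd [0-9]+ failed: [A-Za-z ]+' build.log | sort | uniq -c | sort -rn

# Pull the closing coverage summary of every fuzzer slot.
grep -E '#[0-9]+ DONE cov:' build.log |
  awk '{ for (i = 1; i <= NF; i++)
           if ($i == "cov:" || $i == "ft:" || $i == "corp:")
             printf "%s %s  ", $i, $(i+1); print "" }'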
00:10:14.523 [2024-07-23 13:52:05.336014] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:14.523 [2024-07-23 13:52:05.336067] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.523 #25 NEW cov: 10771 ft: 16508 corp: 10/606b lim: 120 exec/s: 25 rss: 70Mb L: 73/117 MS: 1 ChangeByte- 00:10:14.523 [2024-07-23 13:52:05.498442] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:14.523 [2024-07-23 13:52:05.498490] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.781 #26 NEW cov: 10771 ft: 16671 corp: 11/679b lim: 120 exec/s: 26 rss: 70Mb L: 73/117 MS: 1 ChangeBinInt- 00:10:14.781 [2024-07-23 13:52:05.660875] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:14.781 [2024-07-23 13:52:05.660925] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:14.781 #27 NEW cov: 10771 ft: 16682 corp: 12/737b lim: 120 exec/s: 13 rss: 70Mb L: 58/117 MS: 1 ChangeByte- 00:10:14.781 #27 DONE cov: 10771 ft: 16682 corp: 12/737b lim: 120 exec/s: 13 rss: 70Mb 00:10:14.781 Done 27 runs in 2 second(s) 00:10:15.348 13:52:06 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-5 00:10:15.348 13:52:06 -- ../common.sh@72 -- # (( i++ )) 00:10:15.348 13:52:06 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:15.348 13:52:06 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:10:15.348 13:52:06 -- vfio/run.sh@22 -- # local fuzzer_type=6 00:10:15.348 13:52:06 -- vfio/run.sh@23 -- # local timen=1 00:10:15.348 13:52:06 -- vfio/run.sh@24 -- # local core=0x1 00:10:15.348 13:52:06 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:15.348 13:52:06 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:10:15.348 13:52:06 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:10:15.348 13:52:06 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:10:15.348 13:52:06 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:10:15.348 13:52:06 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:15.348 13:52:06 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:10:15.348 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:15.348 13:52:06 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:10:15.348 [2024-07-23 13:52:06.148118] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
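The tail of the trace above is the per-slot ritual run.sh repeats for each fuzzer number: create the /tmp/vfio-user-N directories and the persistent corpus directory, rewrite the template fuzz_vfio_json.conf with sed so its vfio-user paths point at this slot, then launch llvm_vfio_fuzz pinned to core 0x1 with the rewritten config, the corpus directory, and a one-second budget. A hedged standalone distillation, useful for re-running a single slot outside Jenkins (the redirection of sed's output into the per-slot config is an assumption implied by run.sh's later use of that file; adjust SPDK to your checkout):

# Hedged re-run of fuzzer slot N, mirroring the flags traced above.
N=6
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
mkdir -p /tmp/vfio-user-$N/domain/1 /tmp/vfio-user-$N/domain/2 \
         "$SPDK/../corpus/llvm_vfio_$N"
sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$N/domain/1%" \
    -e "s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$N/domain/2%" \
    "$SPDK/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" \
    > /tmp/vfio-user-$N/fuzz_vfio_json.conf
"$SPDK/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m 0x1 -s 0 \
    -P "$SPDK/../output/llvm/" \
    -F /tmp/vfio-user-$N/domain/1 \
    -c /tmp/vfio-user-$N/fuzz_vfio_json.conf \
    -t 1 -D "$SPDK/../corpus/llvm_vfio_$N" \
    -Y /tmp/vfio-user-$N/domain/2 \
    -r /tmp/vfio-user-$N/spdk$N.sock -Z $N

The surrounding (( i++ )) / (( i < fuzz_num )) traces show common.sh driving exactly this in a loop, one slot per fuzzer type.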
00:10:15.348 [2024-07-23 13:52:06.148204] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921168 ] 00:10:15.348 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.348 [2024-07-23 13:52:06.278313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.607 [2024-07-23 13:52:06.379877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.607 [2024-07-23 13:52:06.380073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.607 INFO: Running with entropic power schedule (0xFF, 100). 00:10:15.607 INFO: Seed: 3675125817 00:10:15.607 INFO: Loaded 1 modules (338583 inline 8-bit counters): 338583 [0x27cc1cc, 0x281ec63), 00:10:15.607 INFO: Loaded 1 PC tables (338583 PCs): 338583 [0x281ec68,0x2d495d8), 00:10:15.607 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:15.607 INFO: A corpus is not provided, starting from an empty corpus 00:10:15.607 #2 INITED exec/s: 0 rss: 61Mb 00:10:15.607 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:15.607 This may also happen if the target rejected all inputs we tried so far 00:10:15.865 [2024-07-23 13:52:06.680281] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:15.865 [2024-07-23 13:52:06.680339] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:16.431 NEW_FUNC[1/638]: 0x483cc0 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:10:16.431 NEW_FUNC[2/638]: 0x486290 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:16.431 #5 NEW cov: 10715 ft: 10690 corp: 2/86b lim: 90 exec/s: 0 rss: 68Mb L: 85/85 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:10:16.431 [2024-07-23 13:52:07.293325] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:16.431 [2024-07-23 13:52:07.293392] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:16.431 NEW_FUNC[1/1]: 0x192aaa0 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:10:16.431 #10 NEW cov: 10753 ft: 13233 corp: 3/145b lim: 90 exec/s: 0 rss: 69Mb L: 59/85 MS: 5 ShuffleBytes-ChangeBit-ChangeBit-ChangeBit-CrossOver- 00:10:16.689 [2024-07-23 13:52:07.476798] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:16.689 [2024-07-23 13:52:07.476848] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:16.689 #11 NEW cov: 10756 ft: 14506 corp: 4/204b lim: 90 exec/s: 0 rss: 70Mb L: 59/85 MS: 1 ChangeBit- 00:10:16.689 [2024-07-23 13:52:07.639370] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:16.689 [2024-07-23 13:52:07.639419] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:16.948 #12 NEW cov: 10756 ft: 14827 corp: 5/261b lim: 90 exec/s: 12 rss: 70Mb L: 57/85 MS: 1 InsertRepeatedBytes- 00:10:16.948 [2024-07-23 13:52:07.801980] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:16.948 
[2024-07-23 13:52:07.802028] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:16.948 #13 NEW cov: 10756 ft: 15032 corp: 6/347b lim: 90 exec/s: 13 rss: 70Mb L: 86/86 MS: 1 InsertByte- 00:10:16.948 [2024-07-23 13:52:07.964242] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:16.948 [2024-07-23 13:52:07.964290] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:17.206 #14 NEW cov: 10756 ft: 15255 corp: 7/433b lim: 90 exec/s: 14 rss: 70Mb L: 86/86 MS: 1 ShuffleBytes- 00:10:17.206 [2024-07-23 13:52:08.126847] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:17.206 [2024-07-23 13:52:08.126896] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:17.206 #15 NEW cov: 10756 ft: 15305 corp: 8/492b lim: 90 exec/s: 15 rss: 70Mb L: 59/86 MS: 1 CopyPart- 00:10:17.465 [2024-07-23 13:52:08.279282] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:17.465 [2024-07-23 13:52:08.279331] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:17.465 #16 NEW cov: 10756 ft: 15611 corp: 9/563b lim: 90 exec/s: 16 rss: 70Mb L: 71/86 MS: 1 EraseBytes- 00:10:17.465 [2024-07-23 13:52:08.431665] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:17.465 [2024-07-23 13:52:08.431713] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:17.724 #17 NEW cov: 10763 ft: 16017 corp: 10/650b lim: 90 exec/s: 17 rss: 70Mb L: 87/87 MS: 1 CopyPart- 00:10:17.724 [2024-07-23 13:52:08.583874] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:17.724 [2024-07-23 13:52:08.583921] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:17.724 #18 NEW cov: 10763 ft: 16189 corp: 11/737b lim: 90 exec/s: 9 rss: 70Mb L: 87/87 MS: 1 ChangeBinInt- 00:10:17.724 #18 DONE cov: 10763 ft: 16189 corp: 11/737b lim: 90 exec/s: 9 rss: 70Mb 00:10:17.724 Done 18 runs in 2 second(s) 00:10:18.291 13:52:09 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-6 00:10:18.291 13:52:09 -- ../common.sh@72 -- # (( i++ )) 00:10:18.291 13:52:09 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:18.291 13:52:09 -- vfio/run.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:10:18.291 00:10:18.291 real 0m20.866s 00:10:18.291 user 0m27.935s 00:10:18.291 sys 0m2.389s 00:10:18.291 13:52:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.291 13:52:09 -- common/autotest_common.sh@10 -- # set +x 00:10:18.291 ************************************ 00:10:18.291 END TEST vfio_fuzz 00:10:18.291 ************************************ 00:10:18.291 13:52:09 -- fuzz/llvm.sh@67 -- # [[ 1 -eq 0 ]] 00:10:18.291 00:10:18.291 real 1m30.294s 00:10:18.291 user 2m7.293s 00:10:18.291 sys 0m13.020s 00:10:18.291 13:52:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.291 13:52:09 -- common/autotest_common.sh@10 -- # set +x 00:10:18.291 ************************************ 00:10:18.291 END TEST llvm_fuzz 00:10:18.291 ************************************ 00:10:18.291 13:52:09 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:10:18.291 13:52:09 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:10:18.291 13:52:09 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:10:18.291 13:52:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:18.291 13:52:09 -- 
common/autotest_common.sh@10 -- # set +x 00:10:18.291 13:52:09 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:10:18.291 13:52:09 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:10:18.291 13:52:09 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:10:18.291 13:52:09 -- common/autotest_common.sh@10 -- # set +x 00:10:22.493 INFO: APP EXITING 00:10:22.493 INFO: killing all VMs 00:10:22.493 INFO: killing vhost app 00:10:22.493 INFO: EXIT DONE 00:10:26.700 Waiting for block devices as requested 00:10:26.700 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:10:26.700 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:26.700 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:26.700 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:26.700 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:26.700 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:26.959 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:26.959 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:26.959 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:27.219 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:27.219 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:27.219 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:27.479 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:27.479 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:27.479 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:27.739 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:27.739 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:34.317 Cleaning 00:10:34.317 Removing: /dev/shm/spdk_tgt_trace.pid3889716 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3887259 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3888399 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3889716 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3890357 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3890587 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3890824 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3891217 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3891477 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3891680 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3891882 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3892101 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3892865 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3895428 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3895810 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3896027 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3896203 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3896610 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3896779 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3897282 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3897359 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3897694 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3897756 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3897962 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3898144 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3898591 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3898793 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3898990 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3899161 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3899434 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3899462 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3899594 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3899817 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3900064 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3900250 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3900446 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3900630 00:10:34.317 Removing: 
/var/run/dpdk/spdk_pid3900825 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3901018 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3901213 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3901392 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3901596 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3901787 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3902064 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3902287 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3902519 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3902703 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3902905 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3903083 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3903281 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3903465 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3903662 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3903849 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3904044 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3904232 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3904493 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3904702 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3904972 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3905152 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3905353 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3905534 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3905735 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3905913 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3906123 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3906304 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3906505 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3906735 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3907014 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3907239 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3907434 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3907618 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3907818 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3907947 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3908303 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3908846 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3909214 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3909588 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3909956 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3910320 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3910693 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3911060 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3911424 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3911892 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3912366 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3913090 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3913516 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3913919 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3914324 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3914696 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3915058 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3915428 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3915795 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3916155 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3916524 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3916895 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3917256 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3917626 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3917987 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3918355 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3918791 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3919159 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3919531 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3919966 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3920435 00:10:34.317 Removing: 
/var/run/dpdk/spdk_pid3920797 00:10:34.317 Removing: /var/run/dpdk/spdk_pid3921168 00:10:34.317 Clean 00:10:34.317 killing process with pid 3834643 00:10:36.857 killing process with pid 3834640 00:10:36.857 killing process with pid 3834642 00:10:36.857 killing process with pid 3834641 00:10:36.857 13:52:27 -- common/autotest_common.sh@1436 -- # return 0 00:10:36.857 13:52:27 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:10:36.857 13:52:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:36.857 13:52:27 -- common/autotest_common.sh@10 -- # set +x 00:10:36.857 13:52:27 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:10:36.857 13:52:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:36.857 13:52:27 -- common/autotest_common.sh@10 -- # set +x 00:10:36.857 13:52:27 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:10:36.857 13:52:27 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:10:36.857 13:52:27 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:10:36.857 13:52:27 -- spdk/autotest.sh@394 -- # hash lcov 00:10:36.857 13:52:27 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=clang == *\c\l\a\n\g* ]] 00:10:36.857 13:52:27 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:36.857 13:52:27 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:10:36.857 13:52:27 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.857 13:52:27 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.857 13:52:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.857 13:52:27 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.857 13:52:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.857 13:52:27 -- paths/export.sh@5 -- $ export PATH 00:10:36.857 13:52:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.857 13:52:27 -- common/autobuild_common.sh@437 -- $ 
out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:10:36.857 13:52:27 -- common/autobuild_common.sh@438 -- $ date +%s 00:10:36.857 13:52:27 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721735547.XXXXXX 00:10:36.857 13:52:27 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721735547.kEZMZa 00:10:36.857 13:52:27 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:10:36.857 13:52:27 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']' 00:10:36.857 13:52:27 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:10:36.857 13:52:27 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:10:36.857 13:52:27 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:10:36.857 13:52:27 -- common/autobuild_common.sh@454 -- $ get_config_params 00:10:36.857 13:52:27 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:10:36.857 13:52:27 -- common/autotest_common.sh@10 -- $ set +x 00:10:36.857 13:52:27 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:10:36.857 13:52:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:10:36.857 13:52:27 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:36.857 13:52:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:10:36.857 13:52:27 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:10:36.857 13:52:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:10:36.857 13:52:27 -- spdk/autopackage.sh@19 -- $ timing_finish 00:10:36.857 13:52:27 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:10:36.857 13:52:27 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:10:36.857 13:52:27 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:10:36.857 13:52:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:10:36.857 + [[ -n 3791265 ]] 00:10:36.857 + sudo kill 3791265 00:10:36.868 [Pipeline] } 00:10:36.887 [Pipeline] // stage 00:10:36.893 [Pipeline] } 00:10:36.910 [Pipeline] // timeout 00:10:36.916 [Pipeline] } 00:10:36.932 [Pipeline] // catchError 00:10:36.938 [Pipeline] } 00:10:36.958 [Pipeline] // wrap 00:10:36.965 [Pipeline] } 00:10:36.981 [Pipeline] // catchError 00:10:36.991 [Pipeline] stage 00:10:36.993 [Pipeline] { (Epilogue) 00:10:37.005 [Pipeline] catchError 00:10:37.007 [Pipeline] { 00:10:37.020 [Pipeline] echo 00:10:37.021 Cleanup processes 00:10:37.028 [Pipeline] sh 00:10:37.316 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:37.316 3928549 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:37.330 [Pipeline] sh 00:10:37.616 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:37.616 ++ grep -v 'sudo pgrep' 00:10:37.616 ++ awk '{print $1}' 00:10:37.616 + sudo kill -9 00:10:37.616 + true 00:10:37.628 
[Pipeline] sh 00:10:37.912 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:10:39.304 [Pipeline] sh 00:10:39.587 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:10:39.587 Artifacts sizes are good 00:10:39.602 [Pipeline] archiveArtifacts 00:10:39.609 Archiving artifacts 00:10:39.723 [Pipeline] sh 00:10:40.010 + sudo chown -R sys_sgci /var/jenkins/workspace/short-fuzz-phy-autotest 00:10:40.026 [Pipeline] cleanWs 00:10:40.036 [WS-CLEANUP] Deleting project workspace... 00:10:40.036 [WS-CLEANUP] Deferred wipeout is used... 00:10:40.044 [WS-CLEANUP] done 00:10:40.045 [Pipeline] } 00:10:40.066 [Pipeline] // catchError 00:10:40.078 [Pipeline] sh 00:10:40.361 + logger -p user.info -t JENKINS-CI 00:10:40.371 [Pipeline] } 00:10:40.388 [Pipeline] // stage 00:10:40.394 [Pipeline] } 00:10:40.411 [Pipeline] // node 00:10:40.417 [Pipeline] End of Pipeline 00:10:40.448 Finished: SUCCESS